Linear algebra is probably the easiest and the most useful branch of modern mathematics. Indeed, topics such as matrices and linear equations are often taught in middle or high school. On the other hand, concepts and techniques from linear algebra underlie cutting-edge disciplines such as data science and quantum computation. And in the field of numerical analysis, everything is linear algebra!
Today, I am proud to announce a free interactive course, Introduction to Linear Algebra, that will help students all over the world to master this wonderful subject. The course uses the powerful functions for matrix operations in the Wolfram Language and addresses questions such as “How long would it take to solve a system of 500 linear equations?” or “How does data compression work?”
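As a taste of the first question, here is a quick experiment you can try yourself (my own illustrative sketch, not an excerpt from the course): build a random system of 500 linear equations and time its solution with LinearSolve.

```wolfram
(* a random 500x500 coefficient matrix and right-hand side *)
m = RandomReal[{-1, 1}, {500, 500}];
b = RandomReal[{-1, 1}, 500];

(* time the solution of the 500-equation system, in seconds *)
First[AbsoluteTiming[LinearSolve[m, b];]]
```

On modern hardware this typically takes a small fraction of a second, which is itself a nice illustration of how far numerical linear algebra has come.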
I invite you to start exploring the interactive course by clicking anywhere in the following image before reading the rest of this blog post.
The ancient Babylonians and Chinese knew how to solve simple systems of linear equations with two or three equations. However, the first systematic method for solving linear systems was given in 1750 by Gabriel Cramer, who formulated a rule for solving such systems using determinants. This was followed by the 1810 work of Carl Friedrich Gauss, who developed the technique known as Gaussian elimination for solving linear systems. Next, in 1850, James Joseph Sylvester introduced the notion of a matrix to represent arrays of numbers such as those formed by the coefficients of a linear system. A few years later, around 1855, Arthur Cayley published his work on the theory of matrices and operations on them. Finally, in 1888, Giuseppe Peano defined the notion of an abstract vector space, which plays a unifying role in modern linear algebra.
In keeping with the historical development, Introduction to Linear Algebra focuses on matrices and determinants, while vector spaces are discussed only when necessary during the course.
Students taking this course will receive a thorough introduction to linear algebra including standard topics, such as linear systems, geometric transformations, matrix operations, determinants and eigenvalues. The course also includes a few advanced topics, such as the singular value decomposition, one of the most valuable concepts in applied linear algebra. Here is a sneak peek at some of the topics in the course (shown in the left-hand column):
I have worked hard to keep the course down to a reasonable length, and I expect that you will finish watching the 29 videos and also complete the five short quizzes in about five hours. (Hence the title of the blog post!)
Finally, I assume that the students are familiar with high-school algebra and basic trigonometry, but no calculus is required for this course.
The next few sections of the blog post will describe the different components of the course in detail.
The heart of the course is a set of 25 lessons, beginning with “What Is Linear Algebra?”. This introductory lesson includes a discussion of the different approaches to linear algebra, followed by a brief history of the subject and an outline of the course. Here is a short excerpt from the video for this lesson:
Further lessons begin with an overview of the topic (for example, eigenvalues and eigenvectors), followed by a discussion of the key concepts interspersed with examples that illustrate the ideas using Wolfram Language functions for matrix computations.
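To give a sense of the style of such examples, here is a small computation of my own (not excerpted from a lesson) in the spirit of the eigenvalue topic:

```wolfram
(* eigenvalues and eigenvectors of a symmetric 2x2 matrix *)
a = {{2, 1}, {1, 2}};
Eigenvalues[a]    (* {3, 1} *)
Eigenvectors[a]   (* {{1, 1}, {-1, 1}} *)
```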
The videos range from 7 to 12 minutes in length, and each video is accompanied by a transcript notebook displayed on the right-hand side of the screen. You can copy and paste Wolfram Language input directly from the transcript notebook to the embedded scratch notebook to try the examples for yourself.
Each lesson is accompanied by a set of five exercises to review the concepts covered during the lesson. Since this course is designed for independent study, a detailed solution is given for all exercises. The following shows an exercise from lesson 14 on applications of determinants:
The notebooks with the exercises are interactive, so students can try variations of each problem in the Wolfram Cloud. In particular, they are encouraged to change the entries of the matrices, vary the dimensions, etc. and experience the awesome power of the Wolfram Language for matrix computations.
Each section of the course includes an application session, which discusses a real-world application of the ideas developed in that section. There are four application sessions, on polynomial interpolation, economic models, quantum entanglement and data compression. The following is a short excerpt from the video for data compression:

These application sessions celebrate the great success of linear algebra techniques in engineering, computer science and other fields. I would like to thank Roger Germundsson, director of R&D at Wolfram, who suggested their inclusion in the course.
Each section of the course ends with a short, multiple-choice quiz with five problems. The quiz problems are roughly at the same level as those discussed in the lessons, and a student who reviews the section carefully should have no difficulty in doing well on the quiz.
Students will receive instant feedback about their responses to the quiz questions, and they are encouraged to try any method (including hand or computer calculations) to solve them.
I strongly encourage students to watch all the lessons and attempt the quizzes in the recommended sequence, since each topic in the course relies on earlier concepts and techniques. You can request a certificate of completion, pictured here, at the end of the course. A course certificate is earned by watching all the lesson and application videos and passing all the quizzes. It represents proficiency in the fundamentals of linear algebra and will add value to your resume or social media profile.
There is also an optional final exam at the end of the course. You can receive an advanced level of certification by passing this exam.
A mastery of the fundamental concepts of linear algebra is crucial for students of data science, engineering and other fields. I hope that Introduction to Linear Algebra will help you to achieve such a mastery and will drive you toward success in your chosen field. I have enjoyed teaching the course and welcome any comments regarding the current course as well as suggestions for future courses.
I would like to thank Vishaal Ganesan, Knarik Hovhannisyan, Veronica Mullen, Joyce Tracewell, Aram Manaselyan, Tim Shedelbower, Harry Calkins, Amruta Behera, Andy Hunt and Cassidy Hinkle for their dedicated work on various aspects (lessons, exercises, videos, etc.) of the course.
The first half of 2020 has brought with it another exciting batch of publications. Wolfram Media has released Conrad Wolfram’s The Math(s) Fix. Keep an eye out for the upcoming third edition of Hands-on Start to Wolfram Mathematica later in 2020.
The Math(s) Fix: An Education Blueprint for the AI Age is a groundbreaking book that exposes why mathematics education is in crisis worldwide and how the only fix is a fundamentally new mainstream subject. Engaging and accessible yet deep and compelling, The Math(s) Fix argues that today’s math education isn’t working to elevate society with modern computation, data science and AI. Instead, students are subjugated to compete with what computers do best, and lose.
New books from other publishers include writings on advanced calculus, applied holography, quantum mechanics and more.
Written by Wolfram Summer School participant Hamza Alsamraee, Advanced Calculus Explored gives readers tools for success in their STEM courses. The author’s use of the Wolfram Language to explore famous equations, applications in a range of topics and a multitude of nonstandard problems helps readers—especially students in advanced mathematics and science courses—build a stronger, more intuitive understanding of calculus.
Originally presented as lectures given at the Indian Institute of Technology Madras (India) and at the Institute of Theoretical Physics Madrid (Spain), Matteo Baggioli’s new book is a concise and pragmatic course on applied holography. This primer focuses on analytic and numerical techniques, using Mathematica to detail computations and open-source numerical code. The author also shares tricks and techniques, supplementing concrete applications of AdS/CFT to hydrodynamics, quantum chromodynamics and condensed matter.
Authors Carlos A. Coelho and Barry C. Arnold present computational modules in Mathematica and other languages that guide readers in implementing, plotting and computing the distributions of likelihood ratio test statistics, or of any other statistics that fit the general paradigm described. Because the book provides an explicit, manageable finite form for the distributions of these test statistics, researchers and graduate students implementing likelihood ratio tests in multivariate analysis can compute exact quantiles and exact p-values quickly and efficiently.
Author Roman Schmied uses Mathematica to simulate many of the problems students encounter in introductory quantum mechanics. Computer implementations for finding and visualizing analytical and numerical solutions function as building blocks to solve more complex problems, such as coherent laser-driven dynamics in the Rubidium hyperfine structure or the Rashba interaction of an electron moving in 2D. This book is written in the Wolfram Language for its unparalleled ability to perform deep calculations, seamless mix of analytic and numerical facilities, built-in algorithms plus other libraries and the Wolfram Notebook interactive experience—but no prior knowledge of Mathematica is required.
In his blog post announcing the launch of Mathematica Version 12.1, Stephen Wolfram mentioned the extensive updates to Dataset that we undertook to make it easier to explore, understand and present your data. Here is how the updated Dataset works and how you can use it to gain deeper insight into your data.
We have added items to Dataset column header context menus for sorting and reverse sorting your data:
If a Dataset has multiple levels of data, you can sort multiple columns simultaneously:
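The context-menu sorting is interactive, but you can get the same effect programmatically. Here is a minimal sketch (using a small hypothetical dataset of my own) with the standard SortBy query operator:

```wolfram
(* programmatic counterpart of the context-menu sort: order rows by age *)
ds = Dataset[<|
    "Ann" -> <|"age" -> 35, "sex" -> "female"|>,
    "Bob" -> <|"age" -> 41, "sex" -> "male"|>,
    "Cal" -> <|"age" -> 60, "sex" -> "female"|>|>];

ds[SortBy[#age &]]   (* ascending by the "age" column *)
```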
Sort row headers by hovering near the corner of the blank cell atop a row header column. When the menu indicator appears, right-click it to bring up the context menu and choose a sort item:
The context menus of all Dataset cells also include Hide and Show items, which collapse parts of a dataset to give focused views of particular data:
Sorting and hiding give you interactive tools for exploring your data. With Dataset’s new formatting options, you can present your data in ways that make it easier to understand and spot patterns.
The following is a complete set of new Dataset options:
Alignment, Background, ItemSize, ItemStyle: grid-like formatting for Dataset items
HeaderAlignment, HeaderBackground, HeaderSize, HeaderStyle: grid-like formatting for Dataset headers
ItemDisplayFunction, HeaderDisplayFunction: complete control of item and header formatting
HiddenItems: which items are initially hidden
MaxItems: maximum number of items to display without a scrollbar or elision
DatasetDisplayPanel: initial drill-down position
ScrollPosition: initial scroll positions
In the subsequent sections, I'll explain the basic function of each of these options and then take a deep dive into the option value syntax, which lets you apply option values to Dataset data in many useful ways.
These options, familiar from Grid, now work in Dataset as well. Here is a dataset with default styling:
Here is the same Dataset with right-aligned ages, orange backgrounds and italic “children” entries (to change a Dataset’s options, wrap it with Dataset[...] and specify the new options):
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 Alignment -> {"age" -> Right},
 Background -> LightOrange,
 ItemStyle -> {"children" -> Italic}]
Each of the styling options has an analogous header option that operates on the Dataset’s headers rather than the items:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 Alignment -> {"age" -> Right},
 Background -> LightOrange,
 ItemStyle -> {"children" -> Italic},
 HeaderAlignment -> {"age" -> Right},
 HeaderBackground -> LightRed,
 HeaderStyle -> Bold]
If the basic styling options don’t meet your needs, you can take complete control of item and header formatting with the ItemDisplayFunction and HeaderDisplayFunction options.
Here is an item display function that replaces “male” and “female” with the symbols for male and female, and a header display function that changes the “sex” headers accordingly:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 ItemDisplayFunction ->
  {"sex" -> (If[# === "male", \[Mars], \[Venus]] &)},
 HeaderDisplayFunction -> {"sex" -> ("\[Mars]/\[Venus]" &)}]
The display function is given three arguments: the item or header value, the path to the item or header and the entire dataset itself. Here is a header display function that uses the second (path) argument to highlight children with the same name as their parent:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 HeaderDisplayFunction ->
  (If[MatchQ[#2, {x_, "children", x_}], Style[#, Bold, Red], #] &)]
Specify which Dataset items are initially hidden with the HiddenItems option:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 HiddenItems -> {"Eva", "sex"}]
To hide all items by default and unhide individual ones, use All to hide everything and then make exceptions with path→False:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 HiddenItems -> {All, {"Bob"} -> False}]
You can even make exceptions to the exceptions, re-hiding individual unhidden items with path→True:
Dataset[<|
   "Deb" -> <|"age" -> 62, "sex" -> "female",
     "children" -> <|
       "Hal" -> <|"age" -> 29, "sex" -> "male"|>,
       "Kat" -> <|"age" -> 31, "sex" -> "female"|>|>|>,
   "Eva" -> <|"age" -> 43, "sex" -> "female", "children" -> <||>|>,
   "Bob" -> <|"age" -> 41, "sex" -> "male",
     "children" -> <|
       "Bob" -> <|"age" -> 1, "sex" -> "male"|>,
       "Bri" -> <|"age" -> 3, "sex" -> "female"|>,
       "Dan" -> <|"age" -> 6, "sex" -> "male"|>|>|>,
   "Ann" -> <|"age" -> 35, "sex" -> "female",
     "children" -> <|"Amy" -> <|"age" -> 6, "sex" -> "female"|>|>|>,
   "Cal" -> <|"age" -> 60, "sex" -> "female", "children" -> <||>|>|>,
 HiddenItems -> {All, {"Bob"} -> False, "sex" -> True}]
Pre-12.1, the only control you had over how many Dataset items were displayed was via Dataset`$DatasetTargetRowCount. In 12.1, the MaxItems option gives you control over the number of rows displayed as well as columns and deeper levels. To limit the number of rows displayed to 3, specify MaxItems→3:
Dataset[Dataset[ Association[ "Mercury" -> Association[ "Radius" -> Quantity[2439.7`5., "Kilometers"], "Moons" -> Association[]], "Venus" -> Association[ "Radius" -> Quantity[6051.85`5., "Kilometers"], "Moons" -> Association[]], "Earth" -> Association[ "Radius" -> Quantity[ 6367.4446571000000000001`8.299868708313456, "Kilometers"], "Moons" -> Association[ "Moon" -> Association[ "Mass" -> Quantity[ 7.3459006322855173653772`4.995678626217362*^22, "Kilograms"], "Radius" -> Quantity[1737.5`5., "Kilometers"]]]], "Mars" -> Association[ "Radius" -> Quantity[3385.595`4.298042852900571, "Kilometers"], "Moons" -> Association[ "Phobos" -> Association[ "Mass" -> Quantity[ 1.0724880884600402`3.9586073148417724*^16, "Kilograms"], "Radius" -> Quantity[11.1`3., "Kilometers"]], "Deimos" -> Association[ "Mass" -> Quantity[ 1.468340774924336`1.9995659225206786*^15, "Kilograms"], "Radius" -> Quantity[6.2`2., "Kilometers"]]]], "Jupiter" -> Association[ "Radius" -> Quantity[69173.`5., "Kilometers"], "Moons" -> Association[ "Metis" -> Association[ "Mass" -> Quantity[ 1.19864553055047796`0.9999565727231415*^17, "Kilograms"], "Radius" -> Quantity[21.5`3., "Kilometers"]], "Adrastea" -> Association[ "Mass" -> Quantity[ 7.491534565940487`0.9999565727231415*^15, "Kilograms"], "Radius" -> Quantity[8.2`2., "Kilometers"]], "Amalthea" -> Association[ "Mass" -> Quantity[ 2.067663540199574478`2.995678626217367*^18, "Kilograms"], "Radius" -> Quantity[83.45`4., "Kilometers"]], "Thebe" -> Association[ "Mass" -> Quantity[ 1.49830691318809745`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[49.3`3., "Kilometers"]], "Io" -> Association[ "Mass" -> Quantity[ 8.9297833448203530011087`4.995678626217362*^22, "Kilograms"], "Radius" -> Quantity[1821.6`5., "Kilometers"]], "Europa" -> Association[ "Mass" -> Quantity[ 4.7986859848371340385365`4.995678626217362*^22, "Kilograms"], "Radius" -> Quantity[1560.8`5., "Kilometers"]], "Ganymede" -> Association[ "Mass" -> Quantity[ 
1.48150100386563183602529`4.995678626217362*^23, "Kilograms"], "Radius" -> Quantity[2631.2`5., "Kilometers"]], "Callisto" -> Association[ "Mass" -> Quantity[ 1.07567783404752629528633`4.995678626217362*^23, "Kilograms"], "Radius" -> Quantity[2410.3`5., "Kilometers"]], "Themisto" -> Association[ "Mass" -> Quantity[ 6.89221180066526`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[4.`2., "Kilometers"]], "Leda" -> Association[ "Mass" -> Quantity[ 1.0937640466273112`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Himalia" -> Association[ "Mass" -> Quantity[ 6.742381109346438525`1.999565922520683*^18, "Kilograms"], "Radius" -> Quantity[85.`2., "Kilometers"]], "Lysithea" -> Association[ "Mass" -> Quantity[ 6.2928890353900092`1.999565922520683*^16, "Kilograms"], "Radius" -> Quantity[18.`2., "Kilometers"]], "Elara" -> Association[ "Mass" -> Quantity[ 8.6901800964909652`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[43.`2., "Kilometers"]], "S/2000 J11" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "S/2003 J12" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[1.`2., "Kilometers"]], "Carpo" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Euporie" -> Association[ "Mass" -> Quantity[ 1.4983069131881`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[1.`2., "Kilometers"]], "S/2003 J3" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "S/2003 J18" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Orthosie" -> Association[ "Mass" -> Quantity[ 1.4983069131881`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[1.`2., "Kilometers"]], "Euanthe" -> Association[ "Mass" -> Quantity[ 4.4949207395643`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[1.5`2., "Kilometers"]], "Harpalyke" -> Association[ 
"Mass" -> Quantity[ 1.19864553055047`0.9999565727231415*^14, "Kilograms"], "Radius" -> Quantity[2.2`2., "Kilometers"]], "Praxidike" -> Association[ "Mass" -> Quantity[ 4.34509004824548`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[3.4`2., "Kilometers"]], "Thyone" -> Association[ "Mass" -> Quantity[ 8.9898414791287`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[2.`2., "Kilometers"]], "S/2003 J16" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Iocaste" -> Association[ "Mass" -> Quantity[ 1.94779898714453`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[2.6`2., "Kilometers"]], "Mneme" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Hermippe" -> Association[ "Mass" -> Quantity[ 8.9898414791287`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Thelxinoe" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Helike" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[4.`2., "Kilometers"]], "Ananke" -> Association[ "Mass" -> Quantity[ 2.9966138263761948`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[14.`2., "Kilometers"]], "S/2003 J15" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Eurydome" -> Association[ "Mass" -> Quantity[ 4.4949207395643`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[1.5`2., "Kilometers"]], "Arche" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[1.5`2., "Kilometers"]], "Herse" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Pasithee" -> Association[ "Mass" -> Quantity[ 1.4983069131881`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[1.`2., "Kilometers"]], "S/2003 J10" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Chaldene" -> 
[Dataset output elided: the full planetary dataset (planets with their radii, moons, and per-moon mass and radius) rendered with MaxItems -> 3, so the display is limited to the first 3 rows.]
Give a list to specify limits at multiple levels (rows, columns):
[Dataset output elided: the same planetary dataset rendered with MaxItems -> {3, 1}, so the display is limited to 3 rows and 1 column.]
You can specify limits at any depth. Here, the number of each planet’s moons displayed is limited to 1:
Dataset[(* nested planetary data as above *) ..., MaxItems -> {Automatic, Automatic, 1}]
When you click a Dataset header, you drill down to that level in the dataset:
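Clicking is interactive, but the same drill-down can also be expressed programmatically by querying the dataset with a key path. A minimal sketch, assuming `planets` is bound to the nested planetary Dataset shown above:

```
(* drill into one planet's moons *)
planets["Jupiter", "Moons"]

(* drill further, down to a single moon's properties *)
planets["Jupiter", "Moons", "Io"]
```

Each successive key narrows the result by one level of the association hierarchy, mirroring what a header click does in the display panel.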
You can also specify the initial drill-down position directly with DatasetDisplayPanel by giving the path to drill down to:
Dataset[(* nested planetary data as above *) ...
-> Quantity[ 2.9966138263761949`1.9995659225206786*^17, "Kilograms"], "Radius" -> Quantity[30.`2., "Kilometers"]], "Eukelade" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[4.`2., "Kilometers"]], "S/2003 J4" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Sinope" -> Association[ "Mass" -> Quantity[ 7.4915345659404873`1.9995659225206786*^16, "Kilograms"], "Radius" -> Quantity[19.`2., "Kilometers"]], "Hegemone" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Aoede" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[4.`2., "Kilometers"]], "Kallichore" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Autonoe" -> Association[ "Mass" -> Quantity[ 8.9898414791287`0.9999565727231415*^13, "Kilograms"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Callirrhoe" -> Association[ "Mass" -> Quantity[ 8.69018009649097`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[4.3`2., "Kilometers"]], "Cyllene" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "S/2003 J2" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]]]], "Saturn" -> Association[ "Radius" -> Quantity[57316.`5., "Kilometers"], "Moons" -> Association[ "Tarqeq" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.5`2., "Kilometers"]], "Pan" -> Association[ "Mass" -> Quantity[ 4.944412813520729`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[12.8`3., "Kilometers"]], "Daphnis" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.9`2., "Kilometers"]], "Atlas" -> Association[ "Mass" -> Quantity[ 2.097629678463337`1.9995659225206786*^15, "Kilograms"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Prometheus" -> Association[ "Mass" -> Quantity[ 1.86689041383236942`3.9586073148417764*^17, 
"Kilograms"], "Radius" -> Quantity[46.8`3., "Kilometers"]], "Pandora" -> Association[ "Mass" -> Quantity[ 1.49081537862215657`2.9956786262173587*^17, "Kilograms"], "Radius" -> Quantity[40.6`3., "Kilometers"]], "Epimetheus" -> Association[ "Mass" -> Quantity[ 5.25905726529022205`2.9956786262173543*^17, "Kilograms"], "Radius" -> Quantity[58.3`3., "Kilometers"]], "Janus" -> Association[ "Mass" -> Quantity[ 1.896856552096131371`3.9586073148417764*^18, "Kilograms"], "Radius" -> Quantity[90.4`3., "Kilometers"]], "Aegaeon" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[0.25`2., "Kilometers"]], "Mimas" -> Association[ "Mass" -> Quantity[ 3.7907164903658865482`3.9586073148417764*^19, "Kilograms"], "Radius" -> Quantity[198.8`4., "Kilometers"]], "Methone" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[1.6`2., "Kilometers"]], "Anthe" -> Association[ "Mass" -> Quantity[5.`1.*^12, "Kilograms"], "Radius" -> Quantity[1.`1., "Kilometers"]], "Pallene" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.6`2., "Kilometers"]], "Enceladus" -> Association[ "Mass" -> Quantity[ 1.08027928440861826137`3.9586073148417764*^20, "Kilograms"], "Radius" -> Quantity[252.3`4., "Kilometers"]], "Tethys" -> Association[ "Mass" -> Quantity[ 6.17452278924814959099`4.6989700043360205*^20, "Kilograms"], "Radius" -> Quantity[536.3`4., "Kilometers"]], "Calypso" -> Association[ "Mass" -> Quantity[ 3.595936591651433`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[9.5`2., "Kilometers"]], "Telesto" -> Association[ "Mass" -> Quantity[ 7.191873183302868`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[12.`2., "Kilometers"]], "Polydeuces" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[1.2`2., "Kilometers"]], "Dione" -> Association[ "Mass" -> Quantity[ 1.095457133439213688532`4.6989700043360205*^21, "Kilograms"], "Radius" -> Quantity[562.5`4., "Kilometers"]], "Helene" -> Association[ "Mass" -> 
Quantity[ 2.5471217524197656`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[16.`2., "Kilometers"]], "Rhea" -> Association[ "Mass" -> Quantity[ 2.308441461148901741032`4.6989700043360205*^21, "Kilograms"], "Radius" -> Quantity[764.5`4., "Kilometers"]], "Titan" -> Association[ "Mass" -> Quantity[ 1.34520841449162446435527`4.958607314841778*^23, "Kilograms"], "Radius" -> Quantity[2575.5`5., "Kilometers"]], "Hyperion" -> Association[ "Mass" -> Quantity[ 5.543735578795960565`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[133.`4., "Kilometers"]], "Iapetus" -> Association[ "Mass" -> Quantity[ 1.805459830391657427108`4.6989700043360205*^21, "Kilograms"], "Radius" -> Quantity[734.5`4., "Kilometers"]], "Kiviuq" -> Association[ "Mass" -> Quantity[ 3.296275209013815`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[8.`1., "Kilometers"]], "Ijiraq" -> Association[ "Mass" -> Quantity[ 1.198645530550478`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[6.`1., "Kilometers"]], "Phoebe" -> Association[ "Mass" -> Quantity[ 8.287135536843366995`3.9586073148417764*^18, "Kilograms"], "Radius" -> Quantity[106.6`4., "Kilometers"]], "Paaliaq" -> Association[ "Mass" -> Quantity[ 8.240688022534537`1.999565922520683*^15, "Kilograms"], "Radius" -> Quantity[11.`3., "Kilometers"]], "Skathi" -> Association[ "Mass" -> Quantity[ 3.146444517695`1.9995659225206786*^14, "Kilograms"], "Radius" -> Quantity[4.`1., "Kilometers"]], "Albiorix" -> Association[ "Mass" -> Quantity[ 2.0976296784633363`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[16.`2., "Kilometers"]], "S/2007 S2" -> Association[ "Mass" -> Quantity[1.5`2.*^14, "Kilograms"], "Radius" -> Quantity[3.`1., "Kilometers"]], "Bebhionn" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Erriapo" -> Association[ "Mass" -> Quantity[ 7.64136525725929`1.9995659225206914*^14, "Kilograms"], "Radius" -> Quantity[5.`1., "Kilometers"]], "Siarnaq" -> Association[ 
"Mass" -> Quantity[ 3.8955979742890535`1.999565922520683*^16, "Kilograms"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Skoll" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Tarvos" -> Association[ "Mass" -> Quantity[ 2.696952443738576`1.9995659225206786*^15, "Kilograms"], "Radius" -> Quantity[7.5`2., "Kilometers"]], "Greip" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "S/2004 S13" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Hyrrokkin" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[4.`2., "Kilometers"]], "Mundilfari" -> Association[ "Mass" -> Quantity[ 2.09762967846334`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[3.5`2., "Kilometers"]], "S/2006 S1" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Jarnsaxa" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Narvi" -> Association[ "Mass" -> Quantity[ 3.44610590033262`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[3.5`2., "Kilometers"]], "Bergelmir" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "S/2004 S17" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Suttungr" -> Association[ "Mass" -> Quantity[ 2.09762967846334`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[3.5`2., "Kilometers"]], "Hati" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "S/2004 S12" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Bestla" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Farbauti" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Thrymr" -> 
Association[ "Mass" -> Quantity[ 2.09762967846334`1.9995659225206872*^14, "Kilograms"], "Radius" -> Quantity[3.5`2., "Kilometers"]], "S/2007 S3" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.5`2., "Kilometers"]], "Aegir" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "S/2004 S7" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "S/2006 S3" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Kari" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Fenrir" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[2.`2., "Kilometers"]], "Surtur" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Ymir" -> Association[ "Mass" -> Quantity[ 4.944412813520729`1.9995659225206872*^15, "Kilograms"], "Radius" -> Quantity[9.`1., "Kilometers"]], "Loge" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]], "Fornjot" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[3.`2., "Kilometers"]]]], "Uranus" -> Association[ "Radius" -> Quantity[25266.`5., "Kilometers"], "Moons" -> Association[ "Cordelia" -> Association[ "Mass" -> Quantity[ 4.4949207395642923`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[20.1`3., "Kilometers"]], "Ophelia" -> Association[ "Mass" -> Quantity[ 5.3939048874771508`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[21.4`3., "Kilometers"]], "Bianca" -> Association[ "Mass" -> Quantity[ 9.2895028617662042`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[25.7`3., "Kilometers"]], "Cressida" -> Association[ "Mass" -> Quantity[ 3.43112283120074311`2.9956786262173587*^17, "Kilograms"], "Radius" -> Quantity[39.8`3., "Kilometers"]], "Desdemona" -> Association[ "Mass" -> Quantity[ 
1.78298522669383596`2.995678626217367*^17, "Kilograms"], "Radius" -> Quantity[32.`3., "Kilometers"]], "Juliet" -> Association[ "Mass" -> Quantity[ 5.57370171705972251`2.9956786262173543*^17, "Kilograms"], "Radius" -> Quantity[46.8`3., "Kilometers"]], "Portia" -> Association[ "Mass" -> Quantity[ 1.681100356597045339`3.9586073148417764*^18, "Kilograms"], "Radius" -> Quantity[67.6`3., "Kilometers"]], "Rosalind" -> Association[ "Mass" -> Quantity[ 2.54712175241976567`2.9956786262173587*^17, "Kilograms"], "Radius" -> Quantity[36.`2., "Kilometers"]], "Cupid" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[5.`2., "Kilometers"]], "Belinda" -> Association[ "Mass" -> Quantity[ 3.56597045338767194`2.995678626217367*^17, "Kilograms"], "Radius" -> Quantity[40.3`3., "Kilometers"]], "Perdita" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Puck" -> Association[ "Mass" -> Quantity[ 2.893230649366216176`3.9586073148417764*^18, "Kilograms"], "Radius" -> Quantity[81.`2., "Kilometers"]], "Mab" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[5.`2., "Kilometers"]], "Miranda" -> Association[ "Mass" -> Quantity[ 6.5925504180276287794`1.9995659225206872*^19, "Kilograms"], "Radius" -> Quantity[235.8`4., "Kilometers"]], "Ariel" -> Association[ "Mass" -> Quantity[ 1.352971142608851997243`2.9956786262173587*^21, "Kilograms"], "Radius" -> Quantity[578.9`4., "Kilometers"]], "Umbriel" -> Association[ "Mass" -> Quantity[ 1.171676006113092205807`2.9956786262173587*^21, "Kilograms"], "Radius" -> Quantity[584.7`4., "Kilometers"]], "Titania" -> Association[ "Mass" -> Quantity[ 3.525516166731593299572`3.9586073148417764*^21, "Kilograms"], "Radius" -> Quantity[788.9`4., "Kilometers"]], "Oberon" -> Association[ "Mass" -> Quantity[ 3.013095202421263971712`3.9586073148417764*^21, "Kilograms"], "Radius" -> Quantity[761.4`4., "Kilometers"]], "Francisco" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" 
-> Quantity[11.`2., "Kilometers"]], "Caliban" -> Association[ "Mass" -> Quantity[ 7.34170387462167751`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[49.`2., "Kilometers"]], "Stephano" -> Association[ "Mass" -> Quantity[ 5.99322765275239`0.9999565727231373*^15, "Kilograms"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Trinculo" -> Association[ "Mass" -> Quantity[ 7.49153456594048`0.9999565727231373*^14, "Kilograms"], "Radius" -> Quantity[5.`1., "Kilometers"]], "Sycorax" -> Association[ "Mass" -> Quantity[ 5.378921818345269844`2.9956786262173627*^18, "Kilograms"], "Radius" -> Quantity[95.`2., "Kilometers"]], "Margaret" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Prospero" -> Association[ "Mass" -> Quantity[ 2.0976296784633363`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[15.`2., "Kilometers"]], "Setebos" -> Association[ "Mass" -> Quantity[ 2.0976296784633363`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[15.`2., "Kilometers"]], "Ferdinand" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[10.`2., "Kilometers"]]]], "Neptune" -> Association[ "Radius" -> Quantity[24552.5`5., "Kilometers"], "Moons" -> Association[ "Naiad" -> Association[ "Mass" -> Quantity[ 1.94779898714452669`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[33.`2., "Kilometers"]], "Thalassa" -> Association[ "Mass" -> Quantity[ 3.74576728297024363`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[41.`2., "Kilometers"]], "Despina" -> Association[ "Mass" -> Quantity[ 2.09762967846333643`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[75.`2., "Kilometers"]], "Galatea" -> Association[ "Mass" -> Quantity[ 3.745767282970243625`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[88.`2., "Kilometers"]], "Larissa" -> Association[ "Mass" -> Quantity[ 4.944412813520721585`1.999565922520683*^18, "Kilograms"], "Radius" -> Quantity[97.`2., "Kilometers"]], "Proteus" -> 
Association[ "Mass" -> Quantity[ 5.0343112283120074311`2.995678626217367*^19, "Kilograms"], "Radius" -> Quantity[210.`3., "Kilometers"]], "Triton" -> Association[ "Mass" -> Quantity[ 2.139432441341284348686`4.6989700043360205*^22, "Kilograms"], "Radius" -> Quantity[1353.4`5., "Kilometers"]], "Nereid" -> Association[ "Mass" -> Quantity[ 3.0865122411674807466`2.9956786262173587*^19, "Kilograms"], "Radius" -> Quantity[170.`3., "Kilometers"]], "Halimede" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[30.`2., "Kilometers"]], "Sao" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Laomedeia" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Psamathe" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Neso" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[30.`2., "Kilometers"]]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"Radius", "Moons"}, { TypeSystem`Atom[ Quantity[1, "Kilometers"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"Mass", "Radius"}, { TypeSystem`Atom[ Quantity[1, "Kilograms"]], TypeSystem`Atom[ Quantity[1, "Kilometers"]]}], TypeSystem`AnyLength]}], 8], Association["ID" -> 165317787556689]], DatasetDisplayPanel -> {"Earth"}] |
When a Dataset has scrollbars, you can set the initial vertical and horizontal scroll positions with the ScrollPosition option:
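As a minimal, self-contained sketch of the same idea (using a small synthetic dataset instead of the planetary-moon data, so the effect is easy to reproduce; the `{vertical, horizontal}` ordering follows the description above, and `squares` is just an illustrative name):

```wolfram
(* a 100-row dataset, large enough to show scrollbars in a fixed-height display *)
squares = Dataset[Table[<|"n" -> i, "square" -> i^2|>, {i, 100}]];

(* start the view scrolled partway down: the first number is the initial
   vertical position, the second the initial horizontal position *)
Dataset[squares, ScrollPosition -> {200, 0}]
```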
Dataset[planets, ScrollPosition -> {vert, horiz}]
(* the full planetary-moon Dataset expression, inlined in the original copyable input, is abbreviated here as planets; the actual ScrollPosition values are elided *)
-> Quantity[11.`2., "Kilometers"]], "Caliban" -> Association[ "Mass" -> Quantity[ 7.34170387462167751`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[49.`2., "Kilometers"]], "Stephano" -> Association[ "Mass" -> Quantity[ 5.99322765275239`0.9999565727231373*^15, "Kilograms"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Trinculo" -> Association[ "Mass" -> Quantity[ 7.49153456594048`0.9999565727231373*^14, "Kilograms"], "Radius" -> Quantity[5.`1., "Kilometers"]], "Sycorax" -> Association[ "Mass" -> Quantity[ 5.378921818345269844`2.9956786262173627*^18, "Kilograms"], "Radius" -> Quantity[95.`2., "Kilometers"]], "Margaret" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[10.`2., "Kilometers"]], "Prospero" -> Association[ "Mass" -> Quantity[ 2.0976296784633363`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[15.`2., "Kilometers"]], "Setebos" -> Association[ "Mass" -> Quantity[ 2.0976296784633363`1.9995659225206872*^16, "Kilograms"], "Radius" -> Quantity[15.`2., "Kilometers"]], "Ferdinand" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[10.`2., "Kilometers"]]]], "Neptune" -> Association[ "Radius" -> Quantity[24552.5`5., "Kilometers"], "Moons" -> Association[ "Naiad" -> Association[ "Mass" -> Quantity[ 1.94779898714452669`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[33.`2., "Kilometers"]], "Thalassa" -> Association[ "Mass" -> Quantity[ 3.74576728297024363`1.9995659225206872*^17, "Kilograms"], "Radius" -> Quantity[41.`2., "Kilometers"]], "Despina" -> Association[ "Mass" -> Quantity[ 2.09762967846333643`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[75.`2., "Kilometers"]], "Galatea" -> Association[ "Mass" -> Quantity[ 3.745767282970243625`1.9995659225206872*^18, "Kilograms"], "Radius" -> Quantity[88.`2., "Kilometers"]], "Larissa" -> Association[ "Mass" -> Quantity[ 4.944412813520721585`1.999565922520683*^18, "Kilograms"], "Radius" -> Quantity[97.`2., "Kilometers"]], "Proteus" -> 
Association[ "Mass" -> Quantity[ 5.0343112283120074311`2.995678626217367*^19, "Kilograms"], "Radius" -> Quantity[210.`3., "Kilometers"]], "Triton" -> Association[ "Mass" -> Quantity[ 2.139432441341284348686`4.6989700043360205*^22, "Kilograms"], "Radius" -> Quantity[1353.4`5., "Kilometers"]], "Nereid" -> Association[ "Mass" -> Quantity[ 3.0865122411674807466`2.9956786262173587*^19, "Kilograms"], "Radius" -> Quantity[170.`3., "Kilometers"]], "Halimede" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[30.`2., "Kilometers"]], "Sao" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Laomedeia" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Psamathe" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[20.`2., "Kilometers"]], "Neso" -> Association[ "Mass" -> Missing["NotAvailable"], "Radius" -> Quantity[30.`2., "Kilometers"]]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"Radius", "Moons"}, { TypeSystem`Atom[ Quantity[1, "Kilometers"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"Mass", "Radius"}, { TypeSystem`Atom[ Quantity[1, "Kilograms"]], TypeSystem`Atom[ Quantity[1, "Kilometers"]]}], TypeSystem`AnyLength]}], 8], Association["ID" -> 165317787556689]], MaxItems -> {3, 1}, ScrollPosition -> {2, 2}] |
Dataset’s styling options have a rich syntax that supports patterns, cyclic specifications and value functions. To show you how those work, I’ll take a deep dive into Background syntax. Other styling options work similarly.
To apply the same Background color to all items in a Dataset, specify a single color:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> Yellow] |
To specify different colors for successive levels of a Dataset, give a list:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {Yellow, Cyan}] |
But wait, that colored everything green! That’s because the yellow rows and cyan columns blend to give green items. You can see what’s going on more clearly in the next example.
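You can check the blend directly. As a small aside (not part of the original examples), Blend mixes its argument colors evenly by default, so yellow (RGBColor[1, 1, 0]) and cyan (RGBColor[0, 1, 1]) average to a shade of green:

```wl
(* averaging the RGB channels of yellow and cyan gives the green seen above *)
Blend[{Yellow, Cyan}]  (* RGBColor[1/2, 1, 1/2] *)
```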
Giving a list at a given level applies the colors to successive elements. In this case, the first row is yellow, the second is cyan and the rest are the default color:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {{Yellow, Cyan}}] |
If you color the columns similarly, the colors blend at their intersections. Thus the {"Eva","age"} and {"Deb","sex"} items are green, the blend of yellow and cyan:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {{Yellow, Cyan}, {Yellow, Cyan}}] |
As in Grid, you can specify background colors to be used at the beginning, middle and end at a given level. This example makes the first row red, the second orange, then rows cyclically yellow and white until the last row, which is again red:
Dataset[IdentityMatrix[8], Background -> {{Red, Orange, {Yellow, White}, Red}}]
Background colors blend (as they do in Grid) in order to support this kind of styling, which makes it easier to follow long rows and columns:
Dataset[IdentityMatrix[8], Background -> {{{LightBlue, White}}, {{LightGreen, White}}}]
In options other than Background, values do not blend. Instead, later values override earlier ones. And within a Background option value, colors only blend when they are part of the same specification. In this example, the column colors override the row colors, except where the column color is None, which lets the row color show through:
Dataset[IdentityMatrix[8], Background -> {{All} -> {{{LightBlue, White}}}, {All, All} -> {None, {{LightGreen, None}}}}]
You can specify values at arbitrary levels. To use the default coloring at a given level, specify Automatic. In this example, items in the “children” column, which are at the third level of the Dataset, are colored yellow and orange, while items at higher levels have default coloring:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {Automatic, Automatic, {Yellow, Orange}}] |
When you hover over a Dataset element, you’ll see its path displayed below the dataset frame. To apply a background color to that element, specify that path on the left-hand side of a rule in the Background value:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {{All, "sex"} -> Cyan}] |
If you give a non-list element instead of a path on the left-hand side of a rule, the value is applied to any path that contains that element:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {"sex" -> Cyan}] |
Combine level syntax and path syntax to specify a general rule and exceptions, as here where all rows are colored yellow, with the exception of the “Eva” row, which is colored cyan:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {Yellow, {"Eva"} -> Cyan}] |
Element paths can contain arbitrary patterns. Here, both the “Eva” and “Ann” rows are colored cyan:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {{"Eva" | "Ann"} -> Cyan}] |
Patterns can be arbitrarily complex. This colors any row cyan whose header contains a lowercase or uppercase a:
Dataset[Dataset[ Association[ "Deb" -> Association[ "age" -> 62, "sex" -> "female", "children" -> Association[ "Hal" -> Association["age" -> 29, "sex" -> "male"], "Kat" -> Association["age" -> 31, "sex" -> "female"]]], "Eva" -> Association[ "age" -> 43, "sex" -> "female", "children" -> Association[]], "Bob" -> Association[ "age" -> 41, "sex" -> "male", "children" -> Association[ "Bob" -> Association["age" -> 1, "sex" -> "male"], "Bri" -> Association["age" -> 3, "sex" -> "female"], "Dan" -> Association["age" -> 6, "sex" -> "male"]]], "Ann" -> Association[ "age" -> 35, "sex" -> "female", "children" -> Association[ "Amy" -> Association["age" -> 6, "sex" -> "female"]]], "Cal" -> Association[ "age" -> 60, "sex" -> "female", "children" -> Association[]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex", "children"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"age", "sex"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[String]}], TypeSystem`AnyLength]}], 5], Association["ID" -> 165274837883637, MaxItems -> {All, All, All}]], Background -> {{_?(! StringFreeQ[#, "a" | "A"] &)} -> Cyan}] |
The restriction imposed by a path is applied after coloring is applied to the Dataset as a whole. Compare these examples. In the first, top-level rows are colored yellow, white and cyan:
Dataset[Dataset[ Association[ "a" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "x" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "b" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "y" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "c" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "z" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"1", "2", "3"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[Integer], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Assoc[ TypeSystem`Atom[ TypeSystem`Enumeration["a", "b", "c"]], TypeSystem`Atom[Integer], 3], 1]}], 3], Association["ID" -> 165433751674104]], Background -> {{Yellow, White, Cyan}}] |
Adding a path specification restricts the coloring to the “3” column:
Dataset[Dataset[ Association[ "a" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "x" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "b" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "y" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "c" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "z" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"1", "2", "3"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[Integer], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Assoc[ TypeSystem`Atom[ TypeSystem`Enumeration["a", "b", "c"]], TypeSystem`Atom[Integer], 3], 1]}], 3], Association["ID" -> 165433751674104]], Background -> {{All, "3"} -> {{Yellow, White, Cyan}}}] |
To apply the yellow-white-cyan coloring to the individual rows in the {All, "3"} column, specify the coloring at the level of those items, the fourth:
Dataset[Dataset[ Association[ "a" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "x" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "b" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "y" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "c" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "z" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"1", "2", "3"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[Integer], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Assoc[ TypeSystem`Atom[ TypeSystem`Enumeration["a", "b", "c"]], TypeSystem`Atom[Integer], 3], 1]}], 3], Association["ID" -> 165433751674104]], Background -> {{All, "3"} -> {None, None, None, {Yellow, White, Cyan}}}] |
Since nothing outside of the “3” column is colored in the previous example, the path restriction is redundant. This is another way of specifying the same thing:
Dataset[Dataset[ Association[ "a" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "x" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "b" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "y" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]], "c" -> Association[ "1" -> 1, "2" -> 2, "3" -> Association[ "z" -> Association["a" -> 1, "b" -> 2, "c" -> 3]]]], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Struct[{"1", "2", "3"}, { TypeSystem`Atom[Integer], TypeSystem`Atom[Integer], TypeSystem`Assoc[ TypeSystem`Atom[String], TypeSystem`Assoc[ TypeSystem`Atom[ TypeSystem`Enumeration["a", "b", "c"]], TypeSystem`Atom[Integer], 3], 1]}], 3], Association["ID" -> 165433751674104]], Background -> {None, None, None, {Yellow, White, Cyan}}] |
The value of any specification within a styling option can be a function that returns a value. That gives you a useful way of highlighting patterns in data. Here, for example, are the first 100 positive integers, with prime numbers highlighted yellow:
Dataset[Range[100], Background -> (If[PrimeQ[#], Yellow, White] &)]
The arguments of a value function are the value of the item or header, its path within the dataset and the entire dataset itself. Having the dataset available as an argument makes it possible to do local styling based on global properties, as in this example, where rows are colored according to sex. The color of each item is obtained by looking at the value of the “sex” entry in the row that contains the item:
Dataset[ExampleData[{"Dataset", "Titanic"}], Background -> (If[#3[#2[[1]], "sex"] === "male", LightBlue, LightRed] &)]
The new Dataset options are intended to help you gain insight into your data and present it effectively. Next are some examples of how you might use them to do so.
This is a sample of the built-in Titanic dataset:
Dataset[{ Association[ "class" -> "1st", "age" -> 47, "sex" -> "male", "survived" -> False], Association[ "class" -> "3rd", "age" -> 32, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 54, "sex" -> "female", "survived" -> True], Association[ "class" -> "2nd", "age" -> 24, "sex" -> "male", "survived" -> False], Association[ "class" -> "2nd", "age" -> 29, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 55, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 24, "sex" -> "female", "survived" -> True], Association[ "class" -> "1st", "age" -> 25, "sex" -> "male", "survived" -> True]}, TypeSystem`Vector[ TypeSystem`Struct[{"class", "age", "sex", "survived"}, { TypeSystem`Atom[ TypeSystem`Enumeration["1st", "2nd", "3rd"]], TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Atom[TypeSystem`Boolean]}], 8], Association["ID" -> 200390490496301]] |
Styling with ItemDisplayFunction and color backgrounds makes the data more immediately comprehensible:
Dataset[Dataset[{ Association[ "class" -> "1st", "age" -> 47, "sex" -> "male", "survived" -> False], Association[ "class" -> "3rd", "age" -> 32, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 54, "sex" -> "female", "survived" -> True], Association[ "class" -> "2nd", "age" -> 24, "sex" -> "male", "survived" -> False], Association[ "class" -> "2nd", "age" -> 29, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 55, "sex" -> "male", "survived" -> False], Association[ "class" -> "1st", "age" -> 24, "sex" -> "female", "survived" -> True], Association[ "class" -> "1st", "age" -> 25, "sex" -> "male", "survived" -> True]}, TypeSystem`Vector[ TypeSystem`Struct[{"class", "age", "sex", "survived"}, { TypeSystem`Atom[ TypeSystem`Enumeration["1st", "2nd", "3rd"]], TypeSystem`Atom[Integer], TypeSystem`Atom[ TypeSystem`Enumeration["female", "male"]], TypeSystem`Atom[TypeSystem`Boolean]}], 8], Association["ID" -> 200390490496301]], ItemDisplayFunction -> { "class" -> (StringTake[#, 1] &), "age" -> (Tooltip[ Style[Spacer[{2 #, 20}], Background -> GrayLevel[0.75]], #] &), "sex" -> (If[# === "male", \[Mars], \[Venus]] &), "survived" -> (If[#, "\[Checkmark]", ""] &)}, Background -> (Switch[#3[[#2[[1]], "class"]], "1st", RGBColor[ 0.96, 0.96, 1.], "2nd", RGBColor[1., 0.96, 0.96], "3rd", RGBColor[1., 1., 0.96]] &)] |
Since styling options don’t affect the contents of datasets, you can use them to present numeric data in whatever formats make sense without compromising the original data:
Dataset[{ Association[ "weight" -> 19.016849999999998`, "factor" -> 0.8957944119265218, "yield" -> 0.3234856056220916], Association[ "weight" -> 23.73867, "factor" -> 0.15031445199065052`, "yield" -> 0.4385543939388503], Association[ "weight" -> 5.78343, "factor" -> 0.19464352143691332`, "yield" -> 0.7559025964339601], Association[ "weight" -> 21.92067, "factor" -> 0.9981134853066305, "yield" -> 0.3376021923291914], Association[ "weight" -> 22.83753, "factor" -> 0.8753398388191531, "yield" -> 0.40843903121632064`], Association[ "weight" -> 4.81656, "factor" -> 0.5974688040388945, "yield" -> 0.6662428187598886]}, ItemDisplayFunction -> { "weight" -> (Quantity[#, "kg"] &), "factor" -> (NumberForm[#, {2, 2}] &), "yield" -> (PercentForm[#, 2] &) }, Alignment -> Center, HeaderAlignment -> Center] |
Use coloring to make it easier to pick out significant values in data. Here, negative numbers are colored red, and the largest and smallest values in each column are highlighted in blue and pink, respectively:
Dataset[Table[RandomReal[{-1, 1}], {7}, {3}], ItemStyle -> (If[# < 0, Red, Black] &), Background -> (Switch[#, Max[#3[[All, #2[[2]]]]], LightBlue, Min[#3[[All, #2[[2]]]]], LightRed, _, White] &)]
Heat maps are particularly easy to create using a background color function:
Dataset[CompressedData[" 1:eJwBmQFm/iFib1JlAgAAAAcAAAAHAAAAqB8MAQ4o6j8gE/Hh/oW8P5xqyQTM Q+k/WBmY+u4JyD/IeeGK6uTOP5B5K15B1LA/TJKulRVa4T9Ai2CLeJbNP6wr UeQpWNk/ULck/kdq5j84hj0aHR3LP6SgvEb/9Oc/+HmAK/wP2z/I7N6y6K3P P/gK6Lb9RO8/PC7fG7xn6j/+nRu92pvtP6oogp1vf+I/zEBitWC+0j9AuJDT aqHKP25FKjg/k+I/sGLoezN9wD/gP/aVgZzTP7ylaonNyOY/tDn/Gkv/2D8s FTnuKz3qP2AK5haP0tQ/Yjtl3/z74j8ATy5IzJngPyBLC/I3osU/YIfQ3oE+ 3j+wwNnjQtTpP4AhN+0wPYg/rK2DmLJH1D80czYYE0bpP4Cferrsh4g/0Lmu IKwK2T/sdOE7p17dPzgmju22Xd0/cB9pSNFH5z+2bHmnUUzpP+BKX5lm2dA/ fKSDbHkK0z8ga+3Vo2jcP6AD84+yUaw/nPF1oRDS3j8ESZVCIKDnP2CxYKxH /Mk/4ueRPcVu4T/3xctC "], Background -> (Hue[1, #] &)] |
For a more compact presentation, hide the data behind Tooltip. Hovering over an item shows its value:
Dataset[CompressedData[" 1:eJwBmQFm/iFib1JlAgAAAAcAAAAHAAAAqB8MAQ4o6j8gE/Hh/oW8P5xqyQTM Q+k/WBmY+u4JyD/IeeGK6uTOP5B5K15B1LA/TJKulRVa4T9Ai2CLeJbNP6wr UeQpWNk/ULck/kdq5j84hj0aHR3LP6SgvEb/9Oc/+HmAK/wP2z/I7N6y6K3P P/gK6Lb9RO8/PC7fG7xn6j/+nRu92pvtP6oogp1vf+I/zEBitWC+0j9AuJDT aqHKP25FKjg/k+I/sGLoezN9wD/gP/aVgZzTP7ylaonNyOY/tDn/Gkv/2D8s FTnuKz3qP2AK5haP0tQ/Yjtl3/z74j8ATy5IzJngPyBLC/I3osU/YIfQ3oE+ 3j+wwNnjQtTpP4AhN+0wPYg/rK2DmLJH1D80czYYE0bpP4Cferrsh4g/0Lmu IKwK2T/sdOE7p17dPzgmju22Xd0/cB9pSNFH5z+2bHmnUUzpP+BKX5lm2dA/ fKSDbHkK0z8ga+3Vo2jcP6AD84+yUaw/nPF1oRDS3j8ESZVCIKDnP2CxYKxH /Mk/4ueRPcVu4T/3xctC "], ItemDisplayFunction -> {Tooltip[" ", #] &}, Background -> (Hue[1, #] &), ItemSize -> 2] |
Version 12.1 gives Dataset a big boost in functionality, but we’re not done yet. There’s more to come in future versions. If you have specific requests, leave me a note in the comments section.
Get full access to the latest Wolfram Language functionality with a Mathematica 12.1 or Wolfram|One trial.
Sudoku is a popular game that pushes the player’s analytical, mathematical and mental abilities. Solving sudoku problems has long been discussed on Wolfram Community, and there has been some fantastic code presented to solve sudoku problems. To add to that discussion, I will demonstrate several features that are new to Mathematica Version 12.1, including how this game can be solved as an integer optimization problem using the function LinearOptimization, as well as how you can generate new sudoku games.
In a typical sudoku game, the player is presented with a 9×9 grid/board with some numbers exposed in certain positions of the board.
This is an example of a standard sudoku board:
The player is supposed to fill the empty spots with numbers between 1 and 9 (or 1 to n² if it's an n²×n² board), following three rules:
1. Each row must contain all the numbers 1–9.
2. Each column must contain all the numbers 1–9.
3. Each 3×3 block (shown as gray or white blocks) must contain all the numbers 1–9.
Applying these three rules, the player must now fill the board such that none of the rules are violated.
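For readers outside the Wolfram Language, the three rules can be checked mechanically. Here is a minimal Python sketch (the function name and board encoding are my own, not from the blog's ResourceFunctions), shown on a 4×4 board for brevity:

```python
# Hypothetical helper: checks the three sudoku rules for an n^2 x n^2
# board, given as a list of lists of digits.
def satisfies_sudoku_rules(board, n=3):
    size = n * n
    digits = set(range(1, size + 1))
    rows = [set(r) for r in board]
    cols = [set(c) for c in zip(*board)]
    blocks = [
        {board[n * bi + i][n * bj + j] for i in range(n) for j in range(n)}
        for bi in range(n) for bj in range(n)
    ]
    # Every row, column and n x n block must contain all the numbers 1..n^2.
    return all(g == digits for g in rows + cols + blocks)

solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(satisfies_sudoku_rules(solved, n=2))  # True
```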
I will make use of SparseArray to represent the initial sudoku puzzle, building on the “Sudoku Game” example for LinearOptimization:
initialSudokuBoard = SparseArray[{{1, 3} -> 5, {1, 4} -> 3, {2, 1} -> 8, {2, 8} -> 2, {3, 2} -> 7, {3, 5} -> 1, {3, 7} -> 5, {4, 1} -> 4, {4, 6} -> 5, {4, 7} -> 3, {5, 2} -> 1, {5, 5} -> 7, {5, 9} -> 6, {6, 3} -> 3, {6, 4} -> 2, {6, 8} -> 8, {7, 2} -> 6, {7, 4} -> 5, {7, 9} -> 9, {8, 3} -> 4, {8, 8} -> 3, {9, 6} -> 9, {9, 7} -> 7}, {9, 9}, _]; ResourceFunction["DisplaySudokuPuzzle"][initialSudokuBoard]
To solve this problem as an integer optimization problem, let z[i, j] be the vector variable for element (i, j) of the board, and let z_k[i, j] be the kth element of the vector z[i, j]. When z_k[i, j] = 1, then element (i, j) holds the number k. Each element contains only one number, so z[i, j] can contain only one nonzero element, i.e. Total[z[i, j]] == 1:
Clear[z]; squareConstraints = Table[{Total[z[i, j]] == 1, 0 \[VectorLessEqual] z[i, j] \[VectorLessEqual] 1, z[i, j] \[Element] Vectors[9, Integers]}, {i, 9}, {j, 9}];
Applying the first sudoku rule, each row must contain all the numbers, i.e. Sum_(j=1)^9 z[i, j] == e, where e is a nine-dimensional vector of ones:
onesVector = ConstantArray[1, 9]; rowConstraints = Table[Sum[z[i, j], {j, 9}] == onesVector, {i, 9}];
The second rule says that each column must contain all the numbers, i.e. Sum_(i=1)^9 z[i, j] == e:
columnConstraints = Table[Sum[z[i, j], {i, 9}] == onesVector, {j, 9}];
The third rule says that each 3×3 block must contain all the numbers, i.e. Sum_(m=1)^3 Sum_(n=1)^3 z[i + m, j + n] == e for block offsets i, j ∈ {0, 3, 6}:
blockConstraints = Table[Sum[z[i + m, j + n], {m, 3}, {n, 3}] == onesVector, {i, {0, 3, 6}}, {j, {0, 3, 6}}];
Collectively, these make the sudoku constraints for any puzzle:
sudokuConstraints = {squareConstraints, rowConstraints, columnConstraints, blockConstraints};
Collect all the variables:
vars = Flatten[Table[z[i, j], {i, 9}, {j, 9}]];
Convert the known values into constraints. If element (i, j) holds the number k, then z_k[i, j] == 1:
knownConstraints = MapThread[ Indexed[z @@ #1, #2] == 1 &, {initialSudokuBoard[ "NonzeroPositions"], initialSudokuBoard["NonzeroValues"]}];
LinearOptimization is typically used to minimize a linear objective subject to a set of linear constraints. In this case, the objective is simply 0, since we are only interested in finding a feasible solution:
res = LinearOptimization[0, {sudokuConstraints, knownConstraints}, vars]; Short[res, 3]
To know which number goes into which position, the information must be extracted from the vectors z[i, j]. This is easily done as:
Short[pos = MapThread[List @@ #1 -> Range[9].#2 &, {vars, vars /. res}], 4]
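The extraction step works because each z[i, j] is one-hot: dotting it with (1, 2, …, 9) recovers the digit. A small Python sketch of this encoding (names hypothetical, not from the blog's code):

```python
# One-hot encoding of a sudoku cell: z[k-1] == 1 exactly when the cell
# holds digit k, mirroring the binary vectors z[i, j] used by the solver.
def encode(digit, size=9):
    return [1 if k == digit else 0 for k in range(1, size + 1)]

def decode(z):
    # Range[9].z in the Wolfram Language: dot with (1, 2, ..., 9).
    return sum(k * zk for k, zk in enumerate(z, start=1))

z = encode(5)
print(z)          # [0, 0, 0, 0, 1, 0, 0, 0, 0]
print(decode(z))  # 5
```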
Visualize the result by converting the previous output into a SparseArray:
ResourceFunction["DisplaySudokuPuzzle"][SparseArray[pos]]
As you can see, putting the problem together and solving it took 6–7 lines of code. This procedure has been placed as a ResourceFunction called SolveSudokuPuzzle that users can call to solve a sudoku puzzle:
ResourceFunction["SolveSudokuPuzzle"][initialSudokuBoard]
This function has been made quite general and has the capacity to solve sudoku puzzles of arbitrary size. The solver also accepts negative numbers on the board: a negative number is treated as an exclusion, meaning that its absolute value cannot appear at that position.
The strategy we will use to generate a sudoku puzzle is to start with a full board. From this, an element will be randomly selected and the number that lies at that element will be removed. We will then enforce a condition that the number we removed from that element cannot lie at that element. If the solver comes back with a solution despite the additional condition, it means that the number at that position is not unique and cannot leave the board. If the solver comes back with a failed result, then that number at that position is unique and can be removed.
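The uniqueness test at the heart of this strategy can also be phrased as solution counting: a clue may be removed only if no other completion becomes possible. A minimal Python sketch using plain backtracking instead of LinearOptimization (all names hypothetical; shown on a 4×4 board to keep it fast):

```python
def count_solutions(board, n=2, limit=2):
    """Backtracking count of completions of a (possibly partial) board;
    0 entries are blanks. Stops early once `limit` solutions are found."""
    size = n * n
    def ok(r, c, v):
        if v in board[r] or v in (row[c] for row in board):
            return False
        br, bc = n * (r // n), n * (c // n)
        return all(board[br + i][bc + j] != v
                   for i in range(n) for j in range(n))
    def search():
        for r in range(size):
            for c in range(size):
                if board[r][c] == 0:
                    total = 0
                    for v in range(1, size + 1):
                        if ok(r, c, v):
                            board[r][c] = v
                            total += search()
                            board[r][c] = 0
                            if total >= limit:
                                return total
                    return total
        return 1  # no blanks left: one complete solution
    return search()

puzzle = [
    [1, 0, 3, 4],
    [3, 4, 0, 2],
    [2, 1, 4, 3],
    [4, 0, 2, 1],
]
print(count_solutions(puzzle))  # 1: the removed clues were safe to drop
```

A clue removal is accepted exactly when the count stays at 1 afterward, which is the same criterion the blog implements via infeasibility of the negated clue.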
To implement this strategy, there needs to be a way to generate a full random sudoku board. There are several approaches that one can use to generate a full sudoku board. One approach would be to randomly specify the diagonal entries of the sudoku board and allow the solver to generate a puzzle for us:
fullSudokuPuzzle = ResourceFunction["SolveSudokuPuzzle"][ SparseArray@DiagonalMatrix[RandomSample[Range[9]]]]; ResourceFunction["DisplaySudokuPuzzle"][fullSudokuPuzzle]
Randomly ordering the diagonal in this way gives 9! = 362,880 possible starting boards. One advantage our solver has is that we can also specify that certain numbers cannot be present at a particular position. This is done by making that number negative at that position. Taking advantage of this feature, the count grows to 2^9 × 9! = 185,794,560 (over one hundred million) possible starting boards by modifying the procedure:
initialPuzzle = SparseArray@DiagonalMatrix[ RandomSample[Range[9]]*RandomChoice[{1, -1}, 9]]; refSudokuMat = ResourceFunction["SolveSudokuPuzzle"][initialPuzzle]; ResourceFunction["DisplaySudokuPuzzle"][refSudokuMat]
Of course, this is still a very small fraction of the total possible boards, but it is a start.
Now that we have a full board, let us assume that we want to keep only 50 elements from the board. The iterative code would be:
minElementsToKeep = 50; sudokuElements = RandomSample[Thread[ refSudokuMat["NonzeroPositions"] -> refSudokuMat["NonzeroValues"]]]; n = 81; i = 1; While[Length[sudokuElements] > minElementsToKeep && i < n, newElements = sudokuElements; newElements[[i, 2]] *= -1; res = ResourceFunction["SolveSudokuPuzzle"][ SparseArray[newElements, {9, 9}]]; If[res === $Failed, sudokuElements = Delete[sudokuElements, i]; n--, i++];];
Note the extra condition, where numbers that cannot appear at certain positions are removed by making those numbers negative. We can now display our freshly minted sudoku puzzle:
sudokuPuzzle = SparseArray[sudokuElements, {9, 9}, _]; ResourceFunction["DisplaySudokuPuzzle"][sudokuPuzzle]
It’s possible to double-check that the puzzle can be solved and that the result we get back is the same as the reference sudoku we started with:
ResourceFunction["DisplaySudokuPuzzle"][#] & /@ {refSudokuMat, ResourceFunction["SolveSudokuPuzzle"][sudokuPuzzle]}
Notice that the solved puzzle recovered the reference puzzle.
A ResourceFunction called GenerateSudokuPuzzle has been developed for the user's convenience; given the block size and the fraction of elements to expose, it generates a sudoku puzzle along with the full board it came from:
{fullBoard, sudokuPuzzle} = ResourceFunction["GenerateSudokuPuzzle"][3, 0.4]
ResourceFunction["DisplaySudokuPuzzle"][#] & /@ {fullBoard, sudokuPuzzle}
Due to the general nature of the function, sudoku boards can be generated in different sizes. Here is a 4×4 board:
{fullBoard, sudokuPuzzle} = ResourceFunction["GenerateSudokuPuzzle"][2, 0.5]; ResourceFunction["DisplaySudokuPuzzle"][#] & /@ {fullBoard, sudokuPuzzle}
Next is a 16×16 board. The computation time to generate boards increases considerably with size because there are now 256 binary vectors of length 16 (as opposed to 81 vectors of length 9 for the 9×9 case). The following one took about 30 seconds to generate (but will change for every run):
{fullBoard, sudokuPuzzle} = ResourceFunction["GenerateSudokuPuzzle"][4, 0.6]; ResourceFunction["DisplaySudokuPuzzle"][#] & /@ {fullBoard, sudokuPuzzle}
I will be honest: I did not have the courage to solve this sudoku puzzle. I would love to hear from you if you have attempted to solve one of these large puzzles!
The avid player will probably ask the next obvious question: “What is the difficulty level of the previous puzzle?” This is a tricky question to answer, and I believe it is subjective. However, we can attempt to rank a generated puzzle between 1 and 10, with 1 being easy and 10 being very hard, by looking at how many positions on the board can have their elements uniquely identified using the three rules, gradually filling the board until no uniquely determined elements remain.
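One concrete way to realize “uniquely identified by the three rules” is naked-single propagation: repeatedly fill any blank that admits exactly one candidate, and count how far that gets. This Python sketch is my own illustration of that idea, not the implementation behind EstimateSudokuDifficultyLevel:

```python
def fill_naked_singles(board, n=3):
    """Repeatedly fill blanks (0s) that admit exactly one candidate under
    the row/column/block rules; returns the number of cells filled."""
    size = n * n
    def candidates(r, c):
        used = set(board[r]) | {row[c] for row in board}
        br, bc = n * (r // n), n * (c // n)
        used |= {board[br + i][bc + j] for i in range(n) for j in range(n)}
        return set(range(1, size + 1)) - used
    filled = 0
    progress = True
    while progress:
        progress = False
        for r in range(size):
            for c in range(size):
                if board[r][c] == 0:
                    cand = candidates(r, c)
                    if len(cand) == 1:
                        board[r][c] = cand.pop()
                        filled += 1
                        progress = True
    return filled

board = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 0, 4, 3],
    [4, 3, 2, 0],
]
print(fill_naked_singles(board, n=2))  # 2: both blanks are forced
```

The fewer cells such propagation can fill (relative to the blanks), the harder the puzzle is under this crude measure.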
So, for a sudoku puzzle with 40% of elements exposed, the difficulty level will be:
{fullBoard, sudokuPuzzle} = ResourceFunction["GenerateSudokuPuzzle"][3, 0.4]; ResourceFunction["EstimateSudokuDifficultyLevel"][sudokuPuzzle]
You could also ask the generator for its hardest possible puzzle by specifying the fraction of exposed elements to be 0. Of course, a puzzle with nothing exposed cannot be solved uniquely, so the generator will return the sparsest puzzle it can find that still has a unique solution:
{fullBoard, sudokuPuzzle} = ResourceFunction["GenerateSudokuPuzzle"][3, 0.]; ResourceFunction["EstimateSudokuDifficultyLevel"][sudokuPuzzle]
Of course, every run will yield a different number and puzzle. This is the hard puzzle that the generator returned:
ResourceFunction["DisplaySudokuPuzzle"][sudokuPuzzle]
The killer sudoku game is a variant of the original. It follows the same three rules of the original game, but instead of having numbers specified at certain positions, the player is provided with a board that looks like this:
Each color group is called a “cage,” and a number is provided for each cage. This number represents the sum of all the numbers in that cage. For example, the top-left cage contains the number 26 and consists of four red squares. This means that the total of the numbers in those four red squares must equal 26.
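Checking a candidate solution against the cages is straightforward. A small Python sketch (the cage format and names are my own, chosen for illustration):

```python
# Killer sudoku cage check: each cage is a list of (row, col) positions
# plus a target; the digits at those positions must sum to the target.
def cage_sums_ok(board, cages):
    return all(
        sum(board[r][c] for r, c in cells) == target
        for cells, target in cages
    )

board = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
cages = [
    ([(0, 0), (0, 1), (1, 0)], 6),            # 1 + 2 + 3
    ([(0, 2), (0, 3), (1, 2), (1, 3)], 10),   # 3 + 4 + 1 + 2
    ([(1, 1), (2, 0), (2, 1)], 7),            # 4 + 2 + 1
    ([(2, 2), (2, 3), (3, 3)], 8),            # 4 + 3 + 1
    ([(3, 0), (3, 1), (3, 2)], 9),            # 4 + 3 + 2
]
print(cage_sums_ok(board, cages))  # True
```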
Within our framework, this is actually remarkably easy to do. The trick in solving the killer sudoku puzzle using LinearOptimization is to associate each binary vector z[i, j] with another variable, Indexed[y, {i, j}], that actually contains the number at that position. This is done by adding the following set of constraints to the sudoku solver constraints:
Short[Table[Indexed[y, {i, j}] == Range[9].z[i, j], {i, 9}, {j, 9}], 2]
There is a ResourceFunction called SolveKillerSudokuPuzzle that incorporates this additional constraint and solves the provided puzzle.
Of course, there still needs to be a way to create the killer sudoku board. My approach was to generate random Tetris block–like patterns and then use MorphologicalComponents to extract the various blocks (I am eager to hear from readers about their creative approaches to generating a killer sudoku puzzle). The approach I outlined lives as a ResourceFunction called GenerateKillerSudokuPuzzle and allows us to generate the required information for a killer sudoku puzzle:
Short[{refSudokuBoard, {cagePos, cageVals}} = ResourceFunction["GenerateKillerSudokuPuzzle"][], 4]
It would help to visualize this puzzle, and that can be done using DisplayKillerSudokuPuzzle:
ResourceFunction["DisplayKillerSudokuPuzzle"][cagePos, cageVals]
I should point out that a killer sudoku puzzle is actually much easier and cheaper to generate than a traditional sudoku puzzle, because there are no elements to remove. This puzzle is generated from the following reference sudoku board:
ResourceFunction["DisplaySudokuPuzzle"][refSudokuBoard]
You can manually check that the puzzle is valid by adding the numbers in the cages. Our killer sudoku puzzle can now be solved:
solvedPuzzle = ResourceFunction["SolveKillerSudokuPuzzle"][cagePos, cageVals]
During experimentation, I found that sometimes the integer optimization problem is solved within a few seconds, and sometimes it takes over 30 seconds. So, it is difficult to give a good estimate of how quickly the problem can be solved. Here is the result for this particular case:
ResourceFunction["DisplaySudokuPuzzle"][solvedPuzzle]
I have also noticed that sometimes the solved puzzle will not match the reference sudoku board. This, in my opinion, is completely fine. In my experience, the larger the cage size, the more flexibility the solver has to get a feasible solution, and the numbers, therefore, can move around. Smaller cages, on the other hand, make the problem more restrictive.
I hope I have provided a brief glimpse into the world of optimization, especially (mixed) integer optimization, and how the optimization framework can be used to solve some fun problems. There are plenty of application examples in the documentation pages of LinearOptimization, QuadraticOptimization, SecondOrderConeOptimization, SemidefiniteOptimization and ConicOptimization.
You will surely have fun playing and creating your own killer sudoku games. I tried solving a hard one from the web, and after an hour of yelling at the paper, I realized it is just easier for the computer to do it, and, well, here we are. Feel free to share your best puzzles in the comments below, or join the conversation on Wolfram Community.
My name is Tigran Ishkhanyan, and I am a special functions specialist in the Algorithms R&D department at Wolfram Research, working on general problems of the theory and advanced methods of special functions. I joined Wolfram at the beginning of 2018 when I was working on my PhD project in mathematical physics at the University of Burgundy, France, and at the Institute for Physical Research, Armenia.
My PhD project had two major directions: improvement of the theory of Heun functions and their application in quantum mechanics, specifically in the problems of quantum control in two-level systems and relativistic/nonrelativistic wave equations. I came up with the idea of implementing Heun functions into the Wolfram Language when I found out that this functionality had not yet been introduced.
Every high-school student is familiar with simple functions such as Exp, Log, Sin and others, the so-called elementary functions. These functions are well studied and their properties are thoroughly known, yet from time to time we are still able to add to the Wolfram Language something completely new and insightful, like the ComplexPlot3D function, that is useful for both educational and scientific purposes.
For example, here is the familiar sinusoidal plot for Sin:
Plot[Sin[x], {x, -6 \[Pi], 6 \[Pi]}, PlotStyle -> Red]
And here is a plot of the same function over the complex plane:
ComplexPlot3D[Sin[z], {z, -4 \[Pi] - 2 I, 4 \[Pi] + 2 I}, PlotLegends -> Automatic]
Special functions form the next group of mathematical functions after the elementary ones. They have been widely used in mathematical physics and related problems over the last few centuries. For example, the Bessel functions that describe Fraunhofer diffraction and many other phenomena are special functions. In particular, the oscillatory behavior of BesselJ makes it suitable for modeling the oscillations of drums:
Plot[Evaluate[Table[BesselJ[n, x], {n, 1, 3}]], {x, -10, 10}, Filling -> Axis]
In general, the Bessel-type functions, orthogonal polynomials and others are grouped in the class of hypergeometric functions: they are particular or limiting cases of different hypergeometric functions. The class of hypergeometric functions has a well-defined hierarchy, with the Hypergeometric2F1 and HypergeometricPFQ functions standing at the top of this class. The systematic treatment of these functions was first given by Carl Friedrich Gauss.
From the mathematical point of view, the general theory of hypergeometric functions is well developed. These functions have had a significant impact on science (please explore the documentation pages of the hypergeometric functions for examples of applications).
There is also a group of advanced special functions. The Mathieu, spheroidal, Lamé and Heun functions are more general than the Hypergeometric2F1 function, so they are potent enough to solve more complex physical problems like the Schrödinger equation with a periodic potential:
sol = DSolveValue[-w''[z] + Cos[z] w[z] == ℰ w[z], w[z], z]
We have the Mathieu and spheroidal functions in the Wolfram Language, but what we didn't yet have was the class of Heun functions (and, as a particular case, the Lamé, or ellipsoidal harmonic, functions). We have implemented this missing group to more completely cover the named special functions, as most of them are either particular or limiting cases of Heun functions. The rising popularity of Heun functions in the literature suggests that they are the next generation of special functions, one that will serve as a framework for future scientific developments. (For some nice references, please check the bibliography section of the Heun Project.)
There are two major directions of development for mathematical functions in the Wolfram Language: improved documentation for the functionality that is already in the system and implementation of new features, including new functions, methods and techniques of calculations.
In the first direction, we have recently standardized and significantly improved the documentation pages for the 250+ mathematical functions based on a large collection of more than 5,000 examples so that documentation pages now look like small, well-structured handbooks:
In the direction of introducing new features, we have implemented powerful asymptotic tools like Asymptotic, AsymptoticDSolveValue and AsymptoticIntegrate. For Version 12.1, we have introduced 10 new Heun functions that are the most general special functions at the moment.
I will take a short detour and discuss the relation between mathematical functions and differential equations, since this provides the foundation for my approach to the Heun and other special functions.
Many classical elementary and special functions are particular solutions of differential equations. Indeed, many of these functions were first introduced in an attempt to solve differential equations that arose in physics, astronomy and other fields. Thus, they may be viewed as being generated by the associated differential equations.
For example, the exponential function is generated by a simple first-order differential equation:
DSolveValue[{w'[z] == w[z], w[0] == 1}, w[z], z]
Similarly, the following linear second-order differential equation generates the Legendre polynomials:
DSolveValue[ w''[z] + (2 z )/(z^2 - 1) w'[z] - (n (n + 1))/(z^2 - 1) w[z] == 0, w[z], z]
I am a big fan of working directly with differential equations instead of their particular solutions. This approach is much more fruitful: a differential equation acts as a rich data structure from which a lot of additional information about the mathematical functions it generates can be mined.
Now, the classification of linear differential equations is tightly connected with the structure of their singularities or singular points that might either be regular or irregular: these are the points in the complex plane where the coefficients of the differential equations diverge.
For the famous Bessel differential equation:
BesselEq = w''[z] + 1/z w'[z] + (z^2 - n^2)/z^2 w[z];
… that defines the Bessel functions:
DSolveValue[BesselEq == 0, w[z], z]
… the point z = 0 is a regular singular point.
We may generate the solution of a linear differential equation at regular singular points using the Frobenius method, i.e. the power-series method that generates infinite-term expansions with coefficients that obey recurrence relations uniquely defined by the differential equation. The powerful AsymptoticDSolveValue function gives exactly these Frobenius solutions:
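For the n = 0 Bessel equation, the Frobenius recurrence is short enough to run by hand. Substituting a power series into the equation gives the two-term recurrence k² a_k + a_{k−2} = 0, which the following Python sketch iterates with exact rational arithmetic (an illustration alongside the Wolfram code, not part of it):

```python
from fractions import Fraction

# Frobenius coefficients for the n = 0 Bessel equation
#   z^2 w'' + z w' + z^2 w = 0
# at the regular singular point z = 0: substituting w = sum a_k z^k
# gives the two-term recurrence k^2 a_k + a_{k-2} = 0.
def bessel_j0_coefficients(order):
    a = [Fraction(0)] * (order + 1)
    a[0] = Fraction(1)  # normalization: the regular solution J_0
    for k in range(2, order + 1):
        a[k] = -a[k - 2] / (k * k)
    return a

coeffs = bessel_j0_coefficients(4)
print(coeffs[2], coeffs[4])  # -1/4 1/64
# So J_0(z) = 1 - z^2/4 + z^4/64 - ..., matching the Frobenius expansion.
```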
AsymptoticDSolveValue[BesselEq == 0, w[z], {z, 0, 4}]
Here the first Frobenius solution (the one regular at the singular point) is called BesselJ, while the second (singular) one is called BesselY. Interestingly, this is a rather common situation in the theory of special functions. There are exceptions, but usually special functions are Frobenius solutions of their generating equations at some regular singular point. Consider the Gauss hypergeometric equation, the most general second-order differential equation with three regular singular points, located at 0, 1 and ∞:
HypergeometricEq = w''[z] + (c/z + (1 + a + b - c)/(z - 1)) w'[z] + (a b)/(z (z - 1)) w[z];
One of these Frobenius solutions (the regular one) is called Hypergeometric2F1 and is one of the most famous functions in physics:
DSolveValue[HypergeometricEq == 0, w[z], z]
Naturally, the second solution in this output (i.e. the singular one with the pre-factor power function) is the second Frobenius solution of the Gauss hypergeometric equation.
The Hypergeometric2F1 function is an infinite series; the coefficients c_n of this series obey a two-term recurrence relation of the form c_(n+1) = ((a + n)(b + n))/((c + n)(n + 1)) c_n:
Series[Hypergeometric2F1[a, b, c, x], {x, 0, 3}]
… and there is an exact closed-form expression for the nth coefficient of the expansion. This is a common feature for all the hypergeometric functions.
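The two-term recurrence and the closed form can be checked against each other. A Python sketch using exact rational arithmetic (helper names are mine; the closed form is the standard Pochhammer expression (a)_n (b)_n / ((c)_n n!)):

```python
from fractions import Fraction
from math import factorial

def pochhammer(x, n):
    # Rising factorial (x)_n = x (x + 1) ... (x + n - 1).
    p = Fraction(1)
    for k in range(n):
        p *= x + k
    return p

def hyp2f1_coefficients(a, b, c, order):
    """Series coefficients of 2F1(a, b; c; z) from the two-term recurrence
    c_{n+1} = c_n (a + n)(b + n) / ((c + n)(n + 1))."""
    coeffs = [Fraction(1)]
    for n in range(order):
        coeffs.append(coeffs[-1] * (a + n) * (b + n)
                      / ((c + n) * (n + 1)))
    return coeffs

a, b, c = Fraction(1, 2), Fraction(2), Fraction(3)
coeffs = hyp2f1_coefficients(a, b, c, 5)
# Closed form for the nth coefficient: (a)_n (b)_n / ((c)_n n!)
closed = [pochhammer(a, n) * pochhammer(b, n)
          / (pochhammer(c, n) * factorial(n)) for n in range(6)]
print(coeffs == closed)  # True
```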
An important remark, though, is that for advanced special functions (like the Heun functions), the coefficients of the Frobenius expansions obey recurrence relations with at least three terms. There are no general closed-form expressions for these coefficients, so we do not know the explicit forms of the functions and are forced to work with their generating equations, which have one more singular point. This additional regular singular point leads to a significant complication of the solutions.
At last, after this brief diversion into the theory of special functions, we are ready to proceed and present the Heun functions.
Heun’s general differential equation is a second-order linear ordinary differential equation with four regular singular points, located at 0, 1, a and ∞ in the complex plane:
HeunEq = w''[z] + (\[Gamma]/z + \[Delta]/(z - 1) + (1 + \[Alpha] + \[Beta] - \[Gamma] - \[Delta])/(z - a)) w'[z] + (\[Alpha] \[Beta] z - q)/(z (z - 1) (z - a)) w[z];
The general Heun equation is thus a direct generalization of the Gauss hypergeometric equation, with just one additional regular singular point, located at z = a (which may be complex). The equation was first written down in 1889 by the German mathematician Karl Heun.
There is only one book and one chapter in the Digital Library of Mathematical Functions, plus around three hundred articles on different properties and applications of these general special functions. The theory of Heun functions is poorly developed, and a lot of important questions are still open but are being actively investigated.
The general Heun equation has six parameters. Four of them (α, β, γ and δ) are the characteristic exponents of the Frobenius solutions at the different singular points:
AsymptoticDSolveValue[HeunEq == 0, w[z], z -> ∞] // FullSimplify
The parameter a gives the location of the third finite regular singular point, while the parameter q, referred to as an accessory or spectral parameter, is an extremely important parameter that has no counterpart in the case of hypergeometric functions.
In analogy with the hypergeometric equation, the regular Frobenius solution of the general Heun equation at the regular singular origin is called HeunG. It has the value 1 at the origin and branch-cut discontinuities in the complex plane running from 1 to ∞ and from a to ∞:
DSolveValue[HeunEq == 0, w[z], z]
The following shows a plot of the Heun functions for a range of values of the accessory parameter q:
{a, \[Alpha], \[Beta], \[Gamma], \[Delta]} = {4 + I, -0.6 + 0.9 I, -0.7 I, -0.18 - 0.03 I, 0.3 + 0.6 I};
Plot[Evaluate[ Table[Abs[ HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z]], {q, -20, -3, 1}]], {z, -3/10, 9/10}, PlotStyle -> Table[{Hue[i/20], Thickness[0.002]}, {i, 20}], PlotRange -> All, Frame -> True, Axes -> False]
HeunG is simplified to Hypergeometric2F1 for the following sets of the parameters:
A small but important remark here is that, although the closed forms of the Heun functions are unknown, different features of these functions might be revealed from the differential equations. For example, the transformation group of the HeunG function has 192 members (in total, 192 different local solutions for the general Heun equation, written in terms of a single HeunG function).
Unlike the hypergeometric functions whose derivatives are hypergeometric functions with shifted parameters, the derivatives of the Heun functions are special functions of a more complex class solving more complex differential equations. These derivatives were implemented as separate functions in Version 12.1. The derivative of HeunG is HeunGPrime:
D[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], z]
This pair of functions can be used to calculate the higher derivatives of HeunG using the differential equation to eliminate derivatives of order higher than one:
D[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], {z, 2}] // Simplify
Another feature is that indefinite integrals of Heun functions cannot be expressed in terms of elementary or other special functions:
Integrate[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], z]
Like the Hypergeometric2F1 function, HeunG has confluent cases when one or more of the regular singular points in the general Heun equation coalesce, generating equations with a different structure of singularities. We recall that Hypergeometric2F1 has one confluent case: the Hypergeometric1F1 function. HeunG has four confluent modifications called HeunC, HeunD, HeunB and HeunT solving the single-, double-, bi- and tri-confluent Heun equations, respectively.
HeunC is invaluable, as it generalizes the MathieuC and MathieuS functions, as well as others like the BesselI and Hypergeometric2F1 functions:
A noteworthy example is that HeunC solves the generalized spheroidal equation in its general form, without specification of the parameter λ:
sol = DSolveValue[(1 - z^2) w''[z] - 2 z w'[z] + (\[Lambda] + \[Gamma]^2 (1 - z^2) - m^2/(1 - z^2)) w[z] == 0, w[z], z, Assumptions -> {\[Gamma] > 0, m > 0}]
Plot[Abs[sol /. {m -> 4/3, \[Gamma] -> 7/2} /. {C[1] -> 1/3, C[2] -> 1/3} /. \[Lambda] -> {-2, -1, 0, 1, 2}] // Evaluate, {z, -3/4, 3/4}]
HeunD is the standard series solution of the double-confluent Heun equation at the ordinary point z = 1:
Plot3D[Abs[ HeunD[q, 0.2 + I, -0.6 + 0.9 I, -0.7 I, 0.3 + 0.6 I, z]], {q, -20, 2}, {z, 1/2, 2}, ColorFunction -> Function[{q, z, HD}, Hue[HD]], PlotRange -> All]
The HeunB function solves the bi-confluent Heun equation:
sol = DSolve[ y''[z] + (\[Gamma]/z + \[Delta] + \[Epsilon] z) y'[z] + (\[Alpha] z - q)/z y[z] == 0, y[z], z]
It has the following approximations around z = 0:
terms = Normal@Table[Series[HeunB[1/31, 9/10, 1/10, 1/10, 3/2, z], {z, 0, m}], {m, 1, 5, 2}]
Here is a plot of the approximations:
Plot[{HeunB[1/31, 9/10, 1/10, 1/10, 3/2, z], terms}, {z, -6, 3}, PlotRange -> {-4, 8}, PlotLegends -> {"HeunB[q, \[Alpha], \[Gamma], \[Delta], \[Epsilon], z]", "1st approximation", "2nd approximation", "3rd approximation"}]
HeunB is truly useful, as different problems of classical and quantum physics are solved using this function. For example, the Schrödinger equation for the whole family of doubly anharmonic oscillator potentials (in fact, for an arbitrary polynomial potential of up to sixth order):
V[x_] := \[Mu] x^2 + \[Lambda] x^4 + \[Eta] x^6 Plot[V[x] /. {\[Mu] -> -7, \[Lambda] -> -5, \[Eta] -> 1}, {x, -3, 3}]
… is solved in terms of the HeunB function:
DSolve[-w''[z] + V[z] w[z] == ℰ w[z], w[z], z]
… while the problem of normalizable bound states is still unsolved.
The last confluent Heun function, the HeunT function, which might be considered as a generalization of the Airy functions, is the solution of the tri-confluent Heun equation:
DSolve[ y''[z] + (\[Gamma] + \[Delta] z + \[Epsilon] z^2) y'[z] + (\[Alpha] z - q) y[z] == 0, y[z], z]
HeunT solves the classical anharmonic oscillator problem (in fact, the quartic potential):
sol = DSolve[ u''[z] + (Subscript[\[Lambda], 1] + Subscript[\[Lambda], 2] z^2 + Subscript[\[Lambda], 4] z^4) u[z] == 0, u[z], z]
We are able to simulate the dynamics of the oscillator using HeunT functions:
{Subscript[\[Lambda], 1], Subscript[\[Lambda], 2], Subscript[\[Lambda], 4]} = {1, 1/2, 1/4}; Plot[{u[z] /. sol /. {C[1] -> 1, C[2] -> 1}}, {z, 0, 9/2}]
Surprisingly (or not?), the “primes” of the Heun functions are independent actors and have important applications in science.
The Wolfram Language also has the MeijerG superfunction, with a powerful tool set and wide variety of features:
MeijerG[{{}, {}}, {{v}, {-v}}, z]
Unfortunately, the MeijerG representations of special functions are limited to the hypergeometric class of functions and are not applicable in the Heun case (as well as Mathieu and spheroidal cases).
These and a lot of other interesting examples on the properties and applications of the Heun functions are noted in the documentation pages.
Heun functions have a range of applications in contemporary physics and are powerful enough to generate solutions for a significant set of unsolved problems from quantum mechanics, the theory of black holes, conformal field theory and others. They are being applied to real physical problems at a rapid rate: according to arXiv, the number of publications related to the theory of Heun functions in the last decade is triple that of all such publications before 2010.
Specifically, the powerful apparatus of the Heun functions allows derivation of new infinite classes of integrable potentials for relativistic and nonrelativistic wave equations used in different problems of quantum control and engineering (please see the recent paper by A. M. Ishkhanyan for different examples).
Heun functions appear in the theory of Kerr–de Sitter black holes and may be used for analysis in more complex geometries (the papers by R. S. Borissov and P. P. Fiziev and H. Suzuki, E. Takasugi and H. Umetsu discuss these problems).
The relationship between the Heun class of equations and Painlevé transcendents leads to new results for the two-dimensional conformal field theory based on the analysis of the solutions of Heun equations (see the papers of B. C. da Cunha and J. P. Cavalcante and F. Atai and E. Langmann).
The aforementioned examples and others indicate that the Heun functions are important and popular tools for solving a wide range of problems in contemporary physics.
At Wolfram, we are in a constant search for fresh ideas and methods that make the Wolfram Language one of the most powerful and user-friendly tools for scientists working in different areas of contemporary science.
From time to time, the mathematical toolset has to be updated to meet new problems and challenges. Twentieth-century quantum mechanics is closely related to the hypergeometric class of functions, but the set of problems solvable with these special functions is largely exhausted, so a new generation of functions is needed. This is why for Version 12.1 of the Wolfram Language, we implemented the Heun functions and plan to continually improve the coverage of advanced special functions to meet more complex scientific challenges in the future.
Get full access to the latest Wolfram Language functionality with a Mathematica 12.1 or Wolfram|One trial.
Mathematica 12 has powerful functionality for solving partial differential equations (PDEs) both symbolically and numerically. This article focuses on, among other things, the finite element method (FEM)–based solver for nonlinear PDEs that has been newly implemented in Version 12. After briefly reviewing basic syntax of the Wolfram Language for PDEs, including how to designate Dirichlet and Neumann boundary conditions, we will delineate how Mathematica 12 finds the solution of a given nonlinear problem with FEM. We then show some examples in physics and chemistry, such as the Gray–Scott model and the time-dependent Navier–Stokes equation. More information can be found in the Wolfram Language tutorial “Finite Element Programming,” on which most of this article is based.
Wolfram Research's flagship product, Mathematica, is powered by the Wolfram Language, which has more than 5,000 built-in functions. In the field of ordinary and partial differential equations, the foundation of mathematical modeling and analysis, it provides powerful solvers for both symbolic and numerical solution. Recently, its numerical solving capabilities based on the finite element method (FEM) have been greatly strengthened, making it possible to solve partial differential equations (PDEs) over arbitrary regions and to compute eigenvalues and eigenfunctions. This article introduces, with examples, the workflow for applying FEM to realistic problems, focusing on the solution of nonlinear PDEs in the latest Version 12. The details of the workflow for solving nonlinear PDEs with the finite element method, including all the code, are publicly available; see the tutorial "Finite Element Programming" in the Wolfram documentation for Mathematica.
The Wolfram Language provides two functions (commands) for solving differential equations numerically: NDSolve and NDSolveValue. The two differ only slightly in output format; the internal processing is exactly the same. In the following, the shorter name "NDSolve" is used in the text, and "NDSolveValue", whose output is easier to handle, is used in the code examples. To use FEM in Mathematica, first load the package:
Needs["NDSolve`FEM`"]
After that, one simply gives NDSolve the PDE, the region and the initial/boundary conditions. Taking as an example the Poisson equation −∇^{2}u = 1 on the unit disk, with the boundary condition u = 0 on the part of the boundary where x ≥ 0,
eqn = -Inactive[Div][Inactive[Grad][u[x, y], {x, y}], {x, y}] == 1;
Subscript[\[CapitalOmega], D] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x >= 0];
usol = NDSolveValue[{eqn, Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] Subscript[\[CapitalOmega], D]]
This gives the solution, and
Plot3D[usol[x, y], {x, y} \[Element] Subscript[\[CapitalOmega], D]]
plots it.
The partial differential equations to which the finite element method inside NDSolve can currently be applied must have the form

m ∂^{2}u/∂t^{2} + d ∂u/∂t + ∇·(−c∇u − αu + γ) + β·∇u + au = f.  (1)
Here, when the dependent variable u to be solved for is a scalar function on ℝ^{n}, the coefficients m, d, a, f are scalars, α, γ, β are n-dimensional vectors, and c is an n×n matrix. The coefficients c, a, f, α, γ, β may depend on t, x ∈ ℝ^{n}, u, ∇u and so on, while m and d may depend only on x. When equations for several dependent variables u ∈ ℝ^{d} are coupled, γ and f in equation (1) become d-dimensional vectors and the other coefficients become matrices whose entries are vectors. The action of the differential operators is inherited component-wise in operations with the coefficient matrices and the vector u; for example, ∇(u_{1}, …, u_{d})^{T} = (∇u_{1}, …, ∇u_{d})^{T},
and so on.
Many of the PDEs that appear in the natural sciences and in engineering applications are special cases of equation (1). For example, the wave equation is obtained by setting every coefficient other than m and c to zero, and the Navier–Stokes equations for the velocity and pressure fields u = (v, p)^{T} ∈ ℝ^{4} of an incompressible fluid can likewise be expressed in the form of equation (1) with suitable coefficient identifications. In what follows we first consider time-independent problems and treat FEM in the spatial dimensions; time-dependent problems are briefly discussed at the end of Section 3, with examples in Sections 4.3 and 4.4.
What matters is that, for FEM to be considered applicable, the equation given to NDSolve must be recognized as having the form of equation (1) ("coefficient form"), including the dependencies of each coefficient. As a simple example, consider

∇·(u′(x)∇u) + 4 = 0,

which corresponds to equation (1) with c = −∇u and f = −4, all other coefficients being zero. However, if the PDE is entered into NDSolve as
Div[{{Derivative[1][u][x]}}.Grad[u[x], {x}], {x}] + 4 == 0
the PDE is evaluated before NDSolve begins its processing, with the result that the first term of equation (1) is taken to be 2u′(x)u″(x). This is no longer the coefficient form of equation (1), so it cannot be solved with FEM (u″(x) is recognized as a coefficient of u′(x), making a coefficient depend on a second derivative). To pass the equation to NDSolve in the form of equation (1), use Inactive or Inactivate,
Inactive[Div][{{Derivative[1][u][x]}}.Inactive[Grad][u[x], {x}], {x}] + 4 == 0
which keeps the evaluation of ∇ on hold as shown.
Regions of any dimension and any shape can be specified. Simple shapes like the unit disk in the Poisson example above can be built by combining primitives such as Disk and Polygon; regions described by equations or inequalities can be given with ParametricRegion, ImplicitRegion and the like. It is even possible to convert a region image, made from a photograph for example, into region data usable by NDSolve with ImageMesh.
A Dirichlet boundary condition, which directly prescribes the values of the function on the boundary ∂Ω, is specified
NDSolveValue[{PDE(s) for f[x, y], DirichletCondition[f[x, y] == bc, predicate]}, f, {x, y} \[Element] \[CapitalOmega]]
together with the PDE, as shown. Here bc is a function giving the values on the boundary, and predicate specifies the part of the boundary on which f(x, y) = bc must hold. Setting predicate to True selects all of ∂Ω.
Generalized Neumann boundary conditions (Robin conditions) are specified with NeumannValue. A Robin condition prescribes the component of the flux through the boundary along the outward normal, in the form

n·(c∇u + αu − γ) = g − qu.  (4)

Here n is the outward unit normal vector on ∂Ω, and the right-hand side g − qu is the value supplied by the user. Note, however, that NeumannValue is specified in a different way than DirichletCondition. This stems from the finite element approximation, in which the weak form is obtained by multiplying the PDE by a test function ϕ and integrating over the region Ω. Multiplying the first term of equation (1) by ϕ and integrating, the term involving ∇·(−c∇u − αu + γ) becomes

−∮_{∂Ω} ϕ n·(c∇u + αu − γ) dΓ + ∫_{Ω} ∇ϕ·(c∇u + αu − γ) dΩ.

The integrand of the integral over the boundary ∂Ω is exactly the quantity prescribed by the Robin boundary condition. Replacing this term by the integral of g − qu therefore lets NDSolve handle the boundary condition correctly.
For example, to solve the Laplace equation −∇^{2}u = 0 on the unit disk with the Dirichlet condition u(x, y) = 0 on the part of the boundary with x ≤ 0 and the Neumann condition n·∇u = xy^{2} on the part with x ≥ 1/2, one writes
\[CapitalOmega] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x <= 0];
usol = NDSolveValue[{-Div[Grad[u[x, y], {x, y}], {x, y}] == NeumannValue[x*y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]
(In this case the PDE is recognized unambiguously as being in coefficient form, so the differential operators need not be made Inactive, though writing Inactive[Div] and Inactive[Grad] is of course also fine.)
The minus sign in front of Div makes c in equation (1) equal to 1, so that the Neumann condition n·c∇u = n·∇u = xy^{2} can be entered into NeumannValue as is. Plotting the solution gives the following.
Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]
One point to keep in mind is that, because NeumannValue is defined through equation (4), which is based on the PDE in the form of equation (1), the Neumann value must sometimes be adjusted "by hand." Suppose, for example, that for the Poisson equation −∇^{2}u + 1/5 = 0 we impose the Neumann condition n·∇u = xy^{2} (x ≥ 1/2) together with the Dirichlet condition u(x, y) = 0 (x ≤ 0), but that the PDE actually given to NDSolve is −5∇^{2}u + 1 = 0. NDSolve then recognizes c = 5 in equation (1), so the NeumannValue corresponding to n·c∇u must be set to 5xy^{2}. This may seem obvious once stated, but note that, unlike a Dirichlet condition, a Neumann (Robin) condition is not specified independently of the PDE. The cases −∇^{2}u + 1/5 = 0 and −5∇^{2}u + 1 = 0 are shown below.
Case where the input equation is −∇^{2}u + 1/5 == 0:
\[CapitalOmega] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x <= 0];
usol = NDSolveValue[{-Div[Grad[u[x, y], {x, y}], {x, y}] + 1/5 == NeumannValue[x y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]
Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]
Case where the input equation is −5∇^{2}u + 1 == 0:
usol = NDSolveValue[{-5 Div[Grad[u[x, y], {x, y}], {x, y}] + 1 == NeumannValue[5 x y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]
Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]
The same applies to a Robin condition such as 3u + n·∇u = xy^{2}: if the left-hand side of the PDE inside NDSolve is −∇^{2}u + 1/5, the right-hand side becomes 5*NeumannValue[1/2*x*y^{2} - 3/2*u[x, y], x ≥ 1/2].
Applying FEM requires generating a mesh inside the target region, a topic we will not go into deeply here. Briefly, for the interested reader: a tool called Triangle is used for two-dimensional regions and one called TetGen for three-dimensional regions. Triangle performs Delaunay triangulations, constrained Delaunay triangulations and conforming Delaunay triangulations, while TetGen can generate tetrahedral meshes of 3D regions such as constrained Delaunay tetrahedralizations and boundary conforming Delaunay meshes. The Wolfram Language uses these automatically when needed, but users can also customize them flexibly; for details, see the respective documentation for Triangle and for TetGen.
For linear PDEs, the solver proceeds from the weak form of the PDE through discretization to a system of linear equations; this machinery is also used when solving nonlinear PDEs. The basic flow, in other words, follows the same process as solving a nonlinear algebraic equation with the Newton–Raphson method. The details are described in the tutorial in the Wolfram documentation mentioned above; a brief summary follows.
First, removing the time-derivative part of equation (1) leaves

∇·(−c∇u − αu + γ) + β·∇u + au = f,

and if we set

Γ = −c∇u − αu + γ,  F = f − β·∇u − au,

this takes the simple form

∇·Γ(u) − F(u) = 0.

To linearize this nonlinear PDE, we proceed as in the numerical solution of a single-variable nonlinear equation: starting from some suitable seed function u_{0}, we approach asymptotically the true solution u^{∗} for which ∇·Γ(u^{∗}) − F(u^{∗}) = 0. Writing the difference between u^{∗} and u_{0} as r = u^{∗} − u_{0},
Taylor-expanding Γ and F around u_{0} and neglecting the higher-order O(r^{2}) terms gives

∇·(Γ(u_{0}) + Γ′(u_{0})r) − (F(u_{0}) + F′(u_{0})r) = 0.

The derivatives Γ′ and F′ are obtained by computing ∂Γ/∂∇u, ∂Γ/∂u, ∂F/∂∇u, ∂F/∂u and so on. Evaluating these at u_{0} turns equation (9) into a system of linear equations for u at each discretized point (node). Combining it with the initial and boundary conditions yields a closed system, from which r is obtained.
By default the seed u_{0} is taken to be u(x) = 0, ∀x ∈ Ω, but it can be specified as an option to NDSolve, for example InitialSeeding → {u[x,y] == x + Exp[-Abs[y]]}. Since the asymptotic solution via linearization can fall into an unintended local solution, supplying a good seed based on the background of the problem can be valuable. Also, when computing the residual r from equation (13), evaluating the Jacobian ∇·Γ′(u_{0}) − F′(u_{0}) appearing on the left-hand side is expensive and dominates the total computation time. For this reason, when applying nonlinear FEM the Wolfram Language does not use the plain Newton–Raphson method but an affine covariant Newton method, combined with a Broyden-style strategy of reusing the Jacobian from earlier steps as long as this is acceptable, greatly reducing the number of Jacobian evaluations.
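The Jacobian-reuse idea just described can be illustrated, for an ordinary (non-PDE) nonlinear system, with a short sketch in Python. This is only an illustration of the principle; the function names are ours, and the actual affine covariant Newton implementation in the solver is considerably more sophisticated.

```python
import math

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule (stand-in for the
    sparse linear solve of the discretized problem)."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - A[0][1]*b[1]) / det,
            (A[0][0]*b[1] - b[0]*A[1][0]) / det]

def newton_reuse(F, J, u, tol=1e-12, max_iter=100, reuse=3):
    """Newton iteration that recomputes the Jacobian only every
    `reuse` steps, reusing the previous one in between (a Broyden-style
    amortization of the expensive Jacobian assembly)."""
    Jk = None
    for k in range(max_iter):
        r = F(u)
        if math.hypot(*r) < tol:
            return u
        if k % reuse == 0 or Jk is None:
            Jk = J(u)                      # the expensive step, done sparingly
        du = solve2(Jk, r)                 # linearized correction
        u = [u[0] - du[0], u[1] - du[1]]
    return u

# Toy nonlinear system: x^2 + y^2 = 4, x*y = 1
F = lambda u: [u[0]**2 + u[1]**2 - 4, u[0]*u[1] - 1]
J = lambda u: [[2*u[0], 2*u[1]], [u[1], u[0]]]
sol = newton_reuse(F, J, [2.0, 0.3])
```

Between Jacobian refreshes this behaves like the chord method, which still contracts near the solution; the periodic refresh restores fast convergence.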
Integration in time is handled by first discretizing the spatial dimensions to obtain a coupled system of equations (matrices) and then regarding it as a system of ordinary differential equations in time, to which various methods can be applied: the method of lines or, in some cases, FEM in the time direction as well.
A magnetic field arises around an electric current. For a motor-like configuration, we compute the magnetic field distribution produced when a current is passed through the coils, in particular for nonlinear materials in which the permeability of the ferromagnets forming the stator and the rotor depends on the magnetic field. The basic model equations are the relation between the magnetic field and the vector potential, and Ampère's law. Combining these into one equation and adopting the Coulomb gauge gives
となる．電流はz方向のみに成分をもつとし，また問題を簡単にするためベクトルポテンシャルのx成分，y成分は定数であり，透磁率はz方向に対して一定であると仮定する．すると，式(10)で意味があるのはz成分のみとなり，スカラー量u = A_{z}についてのPDEとなる;
For the permeability μ(B), we used an expression fitted to the measured data shown below.
ListPlot[BHData, PlotLabel -> "Measured magnetic susceptibility"]
Clear[a1, a2, b1, b2, c1, c2];
model = a1 Exp[-(x - b1)^2/(c1^2)] + a2/(1 + Exp[(x - b2)^2/(c2^2)])^2;
fitData = FindFit[BHData, model, {a1, b1, c1, a2, b2, c2}, x];
fit = Function[x, Evaluate[model /. fitData]]
Show[ListPlot[BHData], Plot[fit[x], {x, 0, 3}], PlotLabel -> "Fitted curve for magnetic susceptibility"]
The following figure schematically shows a cross-section of the motor. Assuming a current flows perpendicular to the screen in the yellow and orange parts, we compute the field strength distribution from the nonlinear PDE, equation (11).
mesh["Wireframe"["MeshElementStyle" -> (Directive[FaceForm[#], EdgeForm[]] & /@ {Blue, Red, Gray, Orange, LightOrange, Yellow})]]
Setting up the permeability and specifying the current-carrying elements. In NDSolve, a Dirichlet boundary condition imposes a vanishing magnetic field outside the motor's stator.
B2Norm = Sqrt[Total[Grad[u[x, y], {x, y}]^2] + $MachineEpsilon]; (* norm of grad u *)
\[Mu]Air = 4 \[Pi]*10^-7;
\[Nu] = Piecewise[{{-1/fit[B2Norm], ElementMarker == 2 || ElementMarker == 3}}, -1/\[Mu]Air]*IdentityMatrix[2];
jz = Piecewise[{{10, ElementMarker == 4}, {-10, ElementMarker == 6}}, 0];
usol = NDSolveValue[{Inactive[Div][(\[Nu].Inactive[Grad][u[x, y], {x, y}]), {x, y}] == jz, DirichletCondition[u[x, y] == 0, x^2 + y^2 >= 0.95]}, u, {x, y} \[Element] mesh]
Displaying the result together with a wireframe of the motor structure.
{minsol, maxsol} = MinMax[usol["ValuesOnGrid"]];
Show[ContourPlot[usol[x, y], {x, y} \[Element] mesh, PlotRange -> All, ColorFunction -> "TemperatureMap", Contours -> Range[minsol, maxsol, (maxsol - minsol)/15]], ToBoundaryMesh[mesh]["Wireframe"], VectorPlot[Evaluate[{{0, 1}, {-1, 0}}.Grad[usol[x, y], {x, y}]], {x, y} \[Element] mesh, StreamPoints -> Coarse]]
The Navier–Stokes equations describing an incompressible fluid in a steady state are essentially nonlinear because of the convective term, the second term of the first equation (here the density ρ = 1 and the external force field is set to zero). Since u is a vector, in two dimensions the first equation consists of two component equations, according to whether the differential operator ∇ acts on u_{x} or on u_{y} (see the code below). Let us compute the velocity field inside a two-dimensional cavity. As Dirichlet conditions we impose that the top edge of the fluid filling the cavity is driven to the right at a constant velocity u_{x}, that the flux on the remaining edges is zero, and that the pressure at the bottom-left corner of the figure is zero. The Wolfram Language code is as follows.
\[Nu] =.;
navierstokes = {\[Rho]*{{u[x, y], v[x, y]}}.Inactive[Grad][u[x, y], {x, y}] + Inactive[Div][{{-\[Nu], 0}, {0, -\[Nu]}}.Inactive[Grad][u[x, y], {x, y}], {x, y}] + Derivative[1, 0][p][x, y],
   \[Rho]*{{u[x, y], v[x, y]}}.Inactive[Grad][v[x, y], {x, y}] + Inactive[Div][{{-\[Nu], 0}, {0, -\[Nu]}}.Inactive[Grad][v[x, y], {x, y}], {x, y}] + Derivative[0, 1][p][x, y],
   Derivative[0, 1][v][x, y] + Derivative[1, 0][u][x, y]};
bcs = {DirichletCondition[u[x, y] == 2, y == 1], DirichletCondition[u[x, y] == 0, y != 1], DirichletCondition[v[x, y] == 0, True], DirichletCondition[p[x, y] == 0, x == 0 && y == 0]};
op = navierstokes /. {\[Rho] -> 1, \[Nu] -> 1/1000};
{uVel, vVel, pressure} = NDSolveValue[{op == {0, 0, 0}, bcs}, {u, v, p}, {x, y} \[Element] Rectangle[{0, 0}, {1, 1}], Method -> {"FiniteElement", "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1}, "MeshOptions" -> {"MaxCellMeasure" -> 0.0001}}];
Let us visualize the resulting velocity field.
Show[StreamPlot[{uVel[x, y], vVel[x, y]}, {x, 0, 1}, {y, 0, 1}, Axes -> None, Frame -> None, StreamPoints -> {Automatic, Scaled[0.02]}], ToBoundaryMesh[uVel["ElementMesh"]]["Wireframe"]]
The pressure distribution looks like this.
Plot3D[pressure[x, y], {x, 0, 1}, {y, 0, 1}, PlotRange -> {-0.5, 1.5}, PlotPoints -> 80, Boxed -> True]
The next example computes a reaction–diffusion system: a set of coupled nonlinear PDEs (the Gray–Scott model) describing how the concentrations of several substances change through chemical reaction and diffusion. A feed chemical U is continuously introduced into a reaction vessel filled with another substance V; through the autocatalytic reaction U + 2V → 3V, U is converted, ultimately turning into a final product P that is removed from the system. The model describes the time evolution of the concentrations u and v of U and V, where D_{u} and D_{v} are the respective diffusion coefficients, F is the feed rate of the substance U, and K is a parameter setting the speed of the reaction V → P. The code below goes from entering the equations through to visualization (animation).
eqn = {D[u[t, x, y], t] + Inactive[Plus][Inactive[Div][{{-c1, 0}, {0, -c1}}.Inactive[Grad][u[t, x, y], {x, y}], {x, y}], (v[t, x, y]^2 + f)*u[t, x, y]] == f,
   D[v[t, x, y], t] + Inactive[Plus][Inactive[Div][{{-c2, 0}, {0, -c2}}.Inactive[Grad][v[t, x, y], {x, y}], {x, y}], (-u[t, x, y]*v[t, x, y] + f + k)*v[t, x, y]] == 0} //. {c1 -> 2.*10^-5, c2 -> c1/4, f -> 0.04, k -> 0.06};
ics = {u[0, x, y] == 1/2, v[0, x, y] == If[x^2 + y^2 <= 0.025, 1., 0.]};
bcs = {DirichletCondition[u[t, x, y] == 0, True], DirichletCondition[v[t, x, y] == 0, True]};
{ufun, vfun} = NDSolveValue[{eqn, bcs, ics}, {u, v}, {x, y} \[Element] Disk[], {t, 0, 3000}, Method -> {"TimeIntegration" -> {"IDA", "MaxDifferenceOrder" -> 2}, "PDEDiscretization" -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> True, "SpatialDiscretization" -> {"FiniteElement", "MeshOptions" -> {"MaxCellMeasure" -> 0.002}}}}];
{vmin, vmax} = MinMax[vfun["ValuesOnGrid"]];
frames = Table[Rasterize[ContourPlot[vfun[t, x, y], {x, -1, 1}, {y, -1, 1}, Contours -> Range[vmin, vmax, (vmax - vmin)/4], PlotRange -> All], RasterSize -> Large], {t, 100, 2000, 20}];
ListAnimate[frames]
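For intuition about the model's dynamics, the Gray–Scott equations can also be time-stepped with a crude explicit finite-difference scheme; the sketch below, in Python, is for illustration only and is far less accurate than the FEM/method-of-lines solution above (grid size, time step and the zero boundary values are our own choices).

```python
def laplacian(a, i, j, n):
    """5-point Laplacian with zero (Dirichlet) values outside the grid."""
    def at(x, y):
        return a[x][y] if 0 <= x < n and 0 <= y < n else 0.0
    return at(i-1, j) + at(i+1, j) + at(i, j-1) + at(i, j+1) - 4*a[i][j]

def gray_scott_step(u, v, Du, Dv, F, K, dt, h):
    """One explicit Euler step of du/dt = Du lap(u) - u v^2 + F(1-u),
    dv/dt = Dv lap(v) + u v^2 - (F+K) v."""
    n = len(u)
    un = [row[:] for row in u]
    vn = [row[:] for row in v]
    for i in range(n):
        for j in range(n):
            uvv = u[i][j]*v[i][j]**2
            un[i][j] += dt*(Du*laplacian(u, i, j, n)/h**2 - uvv + F*(1 - u[i][j]))
            vn[i][j] += dt*(Dv*laplacian(v, i, j, n)/h**2 + uvv - (F + K)*v[i][j])
    return un, vn

# Small demo grid: U = 1 everywhere, a spot of V in the middle
n, h, dt = 16, 1/16, 0.1
u = [[1.0]*n for _ in range(n)]
v = [[0.0]*n for _ in range(n)]
v[n//2][n//2] = 1.0
for _ in range(100):
    u, v = gray_scott_step(u, v, 2e-5, 5e-6, 0.04, 0.06, dt, h)
```

After a few steps, U is consumed at the spot while V grows and slowly diffuses outward, which is the qualitative behavior the NDSolve animation shows in much finer detail.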
Time-dependent incompressible flow is described by equation (12) with a time-derivative term added. We compute the velocity distribution for fluid flowing through the space between two infinite parallel plates, with an infinitely long cylinder placed in that space perpendicular to the flow. Taking the plane perpendicular to the plates and the cylinder as the xy plane, the unknowns are the velocity (u, v) and the pressure p. The Wolfram Language code is as follows.
The Navier–Stokes equations:
transientnavierstokes = {\[Rho]*{{u[t, x, y], v[t, x, y]}}.Inactive[Grad][u[t, x, y], {x, y}] + Inactive[Div][{{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][u[t, x, y], {x, y}], {x, y}] + Derivative[0, 1, 0][p][t, x, y] + \[Rho]*Derivative[1, 0, 0][u][t, x, y],
   \[Rho]*{{u[t, x, y], v[t, x, y]}}.Inactive[Grad][v[t, x, y], {x, y}] + Inactive[Div][{{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][v[t, x, y], {x, y}], {x, y}] + Derivative[0, 0, 1][p][t, x, y] + \[Rho]*Derivative[1, 0, 0][v][t, x, y],
   Derivative[0, 0, 1][v][t, x, y] + Derivative[0, 1, 0][u][t, x, y]};
Setting up the size of the flow domain and the velocity profile at the inlet. To avoid the flow velocity jumping discontinuously from zero to a nonzero value at some instant, we define rampFunction, which provides a smooth change of velocity. From the domain size, flow speed and so on, the Reynolds number of this flow is about 200.
rules = {length -> 2.2, height -> 0.41};
\[CapitalOmega] = RegionDifference[Rectangle[{0, 0}, {length, height}], Disk[{1/5, 1/5}, 1/20]] /. rules;
rmf = RegionMember[\[CapitalOmega]];
rampFunction[min_, max_, c_, r_] := Function[t, (min*Exp[c*r] + max*Exp[r*t])/(Exp[c*r] + Exp[r*t])]
ramp = rampFunction[0, 1, 4, 5];
GraphicsRow[{Show[BoundaryDiscretizeRegion[\[CapitalOmega]], VectorPlot[{4*1.5*y*(0.41 - y)/0.41^2, 0}, {x, 0, 2.2}, {y, 0, 0.41}, VectorScale -> Small, VectorStyle -> Red, VectorMarkers -> Placed["Arrow", "Start"], VectorPoints -> Table[{0, y}, {y, 0.05, 0.35, 0.075}], ImageSize -> Large]], Plot[ramp[t], {t, -1, 10}, PlotRange -> All, ImageSize -> Large, AspectRatio -> 1/5]}]
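The smooth ramp used above is a logistic blend between a minimum and a maximum value; re-expressed in Python for a quick sanity check (an illustrative translation, not part of the original workflow):

```python
import math

def ramp_function(lo, hi, c, r):
    """Logistic blend: approaches `lo` for t << c and `hi` for t >> c,
    crossing the midpoint at t = c, with steepness controlled by r."""
    def ramp(t):
        return (lo*math.exp(c*r) + hi*math.exp(r*t)) / (math.exp(c*r) + math.exp(r*t))
    return ramp

# Same parameters as rampFunction[0, 1, 4, 5] above
ramp = ramp_function(0.0, 1.0, 4, 5)
```

The ramp is 0 well before t = 4, exactly 1/2 at t = 4, and saturates at 1 afterwards, so the inlet velocity switches on smoothly rather than discontinuously.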
Boundary and initial conditions:
op = transientnavierstokes /. {\[Mu] -> 10^-3, \[Rho] -> 1};
bcs = {DirichletCondition[u[t, x, y] == ramp[t]*4*1.5*y*(height - y)/height^2, x == 0], DirichletCondition[u[t, x, y] == 0, 0 < x < length], DirichletCondition[v[t, x, y] == 0, 0 <= x < length], DirichletCondition[p[t, x, y] == 0, x == length]} /. rules;
ic = {u[0, x, y] == 0, v[0, x, y] == 0, p[0, x, y] == 0};
Computing the evolution of the velocity distribution from t = 0 to 10 with NDSolve while monitoring t. On a typical PC (3.1 GHz Intel Core i5, 16 GB of RAM), this takes about six minutes.
Dynamic["time: " <> ToString[CForm[currentTime]]]
AbsoluteTiming[{xVel, yVel, pressure} = NDSolveValue[{op == {0, 0, 0}, bcs, ic}, {u, v, p}, {x, y} \[Element] \[CapitalOmega], {t, 0, 10}, Method -> {"TimeIntegration" -> {"IDA", "MaxDifferenceOrder" -> 2}, "PDEDiscretization" -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> True, "SpatialDiscretization" -> {"FiniteElement", "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1}, "MeshOptions" -> {"MaxCellMeasure" -> 0.0002}}}}, EvaluationMonitor :> (currentTime = t;)];]
Coloring by the magnitude of the velocity at each point, we create an animation.
{minX, maxX} = MinMax[Sqrt[xVel["ValuesOnGrid"]^2 + yVel["ValuesOnGrid"]^2]];
mesh = xVel["ElementMesh"];
frames = Table[Rasterize[ContourPlot[Norm[{xVel[t, x, y], yVel[t, x, y]}], {x, 0, 2.2}, {y, 0, 0.41}, PlotRange -> All, AspectRatio -> Automatic, ColorFunction -> "TemperatureMap", Contours -> Range[minX, maxX, (maxX - minX)/7], Axes -> False, Frame -> None, RegionFunction -> Function[{x, y, z}, rmf[{x, y}]]], RasterSize -> 4*{360, 68}, ImageSize -> 2*{360, 68}], {t, 4, 10, 1/20}];
ListAnimate[frames]
The mesh generated for the region can be inspected as follows.
ToElementMesh[\[CapitalOmega], "MaxCellMeasure" -> 0.0002]["Wireframe"]
As we have seen, Mathematica 12 (Wolfram Language 12) greatly extends the range of applicability of the finite element method, making it possible to solve many nonlinear partial differential equations, beginning with the Navier–Stokes equations. Thanks to the Wolfram Language's strength in symbolic computation, highly general, unified processing can be carried out efficiently regardless of the form of the individual PDE. As mentioned above, the details of the FEM-related internal processing are publicly documented; application examples in many fields, including elasticity analysis, acoustic analysis, and heat and vibration conduction analysis, are also explained in detail in the Mathematica tutorials, which we hope readers will find useful.
It’s unexpected, surprising—and for me incredibly exciting. To be fair, at some level I’ve been working towards this for nearly 50 years. But it’s just in the last few months that it’s finally come together. And it’s much more wonderful, and beautiful, than I’d ever imagined.
In many ways it’s the ultimate question in natural science: How does our universe work? Is there a fundamental theory? An incredible amount has been figured out about physics over the past few hundred years. But even with everything that’s been done—and it’s very impressive—we still, after all this time, don’t have a truly fundamental theory of physics.
Back when I used to do theoretical physics for a living, I must admit I didn’t think much about trying to find a fundamental theory; I was more concerned about what we could figure out based on the theories we had. And somehow I think I imagined that if there was a fundamental theory, it would inevitably be very complicated.
But in the early 1980s, when I started studying the computational universe of simple programs I made what was for me a very surprising and important discovery: that even when the underlying rules for a system are extremely simple, the behavior of the system as a whole can be essentially arbitrarily rich and complex.
And this got me thinking: Could the universe work this way? Could it in fact be that underneath all of this richness and complexity we see in physics there are just simple rules? I soon realized that if that was going to be the case, we’d in effect have to go underneath space and time and basically everything we know. Our rules would have to operate at some lower level, and all of physics would just have to emerge.
By the early 1990s I had a definite idea about how the rules might work, and by the end of the 1990s I had figured out quite a bit about their implications for space, time, gravity and other things in physics—and, basically as an example of what one might be able to do with science based on studying the computational universe, I devoted nearly 100 pages to this in my book A New Kind of Science.
I always wanted to mount a big project to take my ideas further. I tried to start around 2004. But pretty soon I got swept up in building Wolfram|Alpha, and the Wolfram Language and everything around it. From time to time I would see physicist friends of mine, and I’d talk about my physics project. There’d be polite interest, but basically the feeling was that finding a fundamental theory of physics was just too hard, and only kooks would attempt it.
It didn’t help that there was something that bothered me about my ideas. The particular way I’d set up my rules seemed a little too inflexible, too contrived. In my life as a computational language designer I was constantly thinking about abstract systems of rules. And every so often I’d wonder if they might be relevant for physics. But I never got anywhere. Until, suddenly, in the fall of 2018, I had a little idea.
It was in some ways simple and obvious, if very abstract. But what was most important about it to me was that it was so elegant and minimal. Finally I had something that felt right to me as a serious possibility for how physics might work. But wonderful things were happening with the Wolfram Language, and I was busy thinking about all the implications of finally having a full-scale computational language.
But then, at our annual Summer School in 2019, there were two young physicists (Jonathan Gorard and Max Piskunov) who were like, “You just have to pursue this!” Physics had been my great passion when I was young, and in August 2019 I had a big birthday and realized that, yes, after all these years I really should see if I can make something work.
So—along with the two young physicists who’d encouraged me—I began in earnest in October 2019. It helped that—after a lifetime of developing them—we now had great computational tools. And it wasn’t long before we started finding what I might call “very interesting things”. We reproduced, more elegantly, what I had done in the 1990s. And from tiny, structureless rules out were coming space, time, relativity, gravity and hints of quantum mechanics.
We were doing zillions of computer experiments, building intuition. And gradually things were becoming clearer. We started understanding how quantum mechanics works. Then we realized what energy is. We found an outline derivation of my late friend and mentor Richard Feynman’s path integral. We started seeing some deep structural connections between relativity and quantum mechanics. Everything just started falling into place. All those things I’d known about in physics for nearly 50 years—and finally we had a way to see not just what was true, but why.
I hadn’t ever imagined anything like this would happen. I expected that we’d start exploring simple rules and gradually, if we were lucky, we’d get hints here or there about connections to physics. I thought maybe we’d be able to have a possible model for the first seconds of the universe, but we’d spend years trying to see whether it might actually connect to the physics we see today.
In the end, if we’re going to have a complete fundamental theory of physics, we’re going to have to find the specific rule for our universe. And I don’t know how hard that’s going to be. I don’t know if it’s going to take a month, a year, a decade or a century. A few months ago I would also have said that I don’t even know if we’ve got the right framework for finding it.
But I wouldn’t say that anymore. Too much has worked. Too many things have fallen into place. We don’t know if the precise details of how our rules are set up are correct, or how simple or not the final rules may be. But at this point I am certain that the basic framework we have is telling us fundamentally how physics works.
It’s always a test for scientific models to compare how much you put in with how much you get out. And I’ve never seen anything that comes close. What we put in is about as tiny as it could be. But what we’re getting out are huge chunks of the most sophisticated things that are known about physics. And what’s most amazing to me is that at least so far we’ve not run across a single thing where we’ve had to say “oh, to explain that we have to add something to our model”. Sometimes it’s not easy to see how things work, but so far it’s always just been a question of understanding what the model already says, not adding something new.
At the lowest level, the rules we’ve got are about as minimal as anything could be. (Amusingly, their basic structure can be expressed in a fraction of a line of symbolic Wolfram Language code.) And in their raw form, they don’t really engage with all the rich ideas and structure that exist, for example, in mathematics. But as soon as we start looking at the consequences of the rules when they’re applied zillions of times, it becomes clear that they’re very elegantly connected to a lot of wonderful recent mathematics.
There’s something similar with physics, too. The basic structure of our models seems alien and bizarrely different from almost everything that’s been done in physics for at least the past century or so. But as we’ve gotten further in investigating our models something amazing has happened: we’ve found that not just one, but many of the popular theoretical frameworks that have been pursued in physics in the past few decades are actually directly relevant to our models.
I was worried this was going to be one of those “you’ve got to throw out the old” advances in science. It’s not. Yes, the underlying structure of our models is different. Yes, the initial approach and methods are different. And, yes, a bunch of new ideas are needed. But to make everything work we’re going to have to build on a lot of what my physicist friends have been working so hard on for the past few decades.
And then there’ll be the physics experiments. If you’d asked me even a couple of months ago when we’d get anything experimentally testable from our models I would have said it was far away. And that it probably wouldn’t happen until we’d pretty much found the final rule. But it looks like I was wrong. And in fact we’ve already got some good hints of bizarre new things that might be out there to look for.
OK, so what do we need to do now? I’m thrilled to say that I think we’ve found a path to the fundamental theory of physics. We’ve built a paradigm and a framework (and, yes, we’ve built lots of good, practical, computational tools too). But now we need to finish the job. We need to work through a lot of complicated computation, mathematics and physics. And see if we can finally deliver the answer to how our universe fundamentally works.
It’s an exciting moment, and I want to share it. I’m looking forward to being deeply involved. But this isn’t just a project for me or our small team. This is a project for the world. It’s going to be a great achievement when it’s done. And I’d like to see it shared as widely as possible. Yes, a lot of what has to be done requires top-of-the-line physics and math knowledge. But I want to expose everything as broadly as possible, so everyone can be involved in—and I hope inspired by—what I think is going to be a great and historic intellectual adventure.
Today we’re officially launching our Physics Project. From here on, we’ll be livestreaming what we’re doing—sharing whatever we discover in real time with the world. (We’ll also soon be releasing more than 400 hours of video that we’ve already accumulated.) I’m posting all my working materials going back to the 1990s, and we’re releasing all our software tools. We’ll be putting out bulletins about progress, and there’ll be educational programs around the project.
Oh, yes, and we’re putting up a Registry of Notable Universes. It’s already populated with nearly a thousand rules. I don’t think any of the ones in there yet are our own universe—though I’m not completely sure. But sometime—I hope soon—there might just be a rule entered in the Registry that has all the right properties, and that we’ll slowly discover that, yes, this is it—our universe finally decoded.
OK, so how does it all work? I’ve written a 448-page technical exposition (yes, I’ve been busy the past few months!). Another member of our team (Jonathan Gorard) has written two 60-page technical papers. And there’s other material available at the project website. But here I’m going to give a fairly non-technical summary of some of the high points.
It all begins with something very simple and very structureless. We can think of it as a collection of abstract relations between abstract elements. Or we can think of it as a hypergraph—or, in simple cases, a graph.
We might have a collection of relations like
{{1, 2}, {2, 3}, {3, 4}, {2, 4}}
that can be represented by a graph like
ResourceFunction["WolframModelPlot"][{{1, 2}, {2, 3}, {3, 4}, {2, 4}}, VertexLabels -> Automatic]
All we’re specifying here are the relations between elements (like {2,3}). The order in which we state the relations doesn’t matter (although the order within each relation does matter). And when we draw the graph, all that matters is what’s connected to what; the actual layout on the page is just a choice made for visual presentation. It also doesn’t matter what the elements are called. Here I’ve used numbers, but all that matters is that the elements are distinct.
OK, so what do we do with these collections of relations, or graphs? We just apply a simple rule to them, over and over again. Here’s an example of a possible rule:
{{x, y}, {x, z}} → {{x, z}, {x, w}, {y, w}, {z, w}}
What this rule says is to pick up two relations—from anywhere in the collection—and see if the elements in them match the pattern {{x,y},{x,z}} (or, in the Wolfram Language, {{x_,y_},{x_,z_}}), where the two x’s can be anything, but both have to be the same, and the y and z can be anything. If there’s a match, then replace these two relations with the four relations on the right. The w that appears there is a new element that’s being created, and the only requirement is that it’s distinct from all other elements.
We can represent the rule as a transformation of graphs:
RulePlot[ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.5]
Now let’s apply the rule once to:
{{1, 2}, {2, 3}, {3, 4}, {2, 4}}
The {2,3} and {2,4} relations get matched, and the rule replaces them with four new relations, so the result is:
{{1, 2}, {3, 4}, {2, 4}, {2, 5}, {3, 5}, {4, 5}}
We can represent this result as a graph (which happens to be rendered flipped relative to the graph above):
ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 1]["FinalStatePlot", VertexLabels -> Automatic]
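The single rewriting step just described is easy to emulate outside the Wolfram Language. Here is a minimal Python sketch that scans pairs of relations in order and applies the rule once; it happens to find the same match as above (WolframModel's actual matching machinery is far more general):

```python
def apply_rule_once(relations):
    """Apply {{x,y},{x,z}} -> {{x,z},{x,w},{y,w},{z,w}} to the first
    matching pair of relations, creating a fresh element w."""
    w = max(e for rel in relations for e in rel) + 1   # guaranteed-new element
    for i, (x1, y) in enumerate(relations):
        for j, (x2, z) in enumerate(relations):
            if i != j and x1 == x2:                    # pattern {{x,y},{x,z}}
                rest = [r for k, r in enumerate(relations) if k not in (i, j)]
                return rest + [(x1, z), (x1, w), (y, w), (z, w)]
    return relations                                   # no match: unchanged

state = apply_rule_once([(1, 2), (2, 3), (3, 4), (2, 4)])
```

Here {2,3} and {2,4} match with x = 2, y = 3, z = 4, and the fresh element is 5, reproducing the six relations listed above.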
OK, so what happens if we just keep applying the rule over and over? Here’s the result:
ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 10, "StatesPlotsList"]
Let’s do it a few more times, and make a bigger picture:
ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 14, "FinalStatePlot"]
What happened here? We have such a simple rule. Yet applying this rule over and over again produces something that looks really complicated. It’s not what our ordinary intuition tells us should happen. But actually—as I first discovered in the early 1980s—this kind of intrinsic, spontaneous generation of complexity turns out to be completely ubiquitous among simple rules and simple programs. And for example my book A New Kind of Science is about this whole phenomenon and why it’s so important for science and beyond.
But here what’s important about it is that it’s what’s going to make our universe, and everything in it. Let’s review again what we’ve seen. We started off with a simple rule that just tells us how to transform collections of relations. But what we get out is this complicated-looking object that, among other things, seems to have some definite shape.
We didn’t put in anything about this shape. We just gave a simple rule. And using that simple rule a graph was made. And when we visualize that graph, it comes out looking like it has a definite shape.
If we ignore all matter in the universe, our universe is basically a big chunk of space. But what is that space? We’ve had mathematical idealizations and abstractions of it for two thousand years. But what really is it? Is it made of something, and if so, what?
Well, I think it’s very much like the picture above. A whole bunch of what are essentially abstract points, abstractly connected together. Except that in the picture there are 6704 of these points, whereas in our real universe there might be more like 10^{400} of them, or even many more.
We don’t (yet) know an actual rule that represents our universe—and it’s almost certainly not the one we just talked about. So let’s discuss what possible rules there are, and what they typically do.
One feature of the rule we used above is that it’s based on collections of “binary relations”, containing pairs of elements (like {2,3}). But the same setup lets us also consider relations with more elements. For example, here’s a collection of two ternary relations:
{{1, 2, 3}, {3, 4, 5}}
We can’t use an ordinary graph to represent things like this, but we can use a hypergraph—a construct where we generalize edges in graphs that connect pairs of nodes to “hyperedges” that connect any number of nodes:
ResourceFunction["WolframModelPlot"][{{1, 2, 3}, {3, 4, 5}}, VertexLabels -> Automatic]
(Notice that we’re dealing with directed hypergraphs, where the order in which nodes appear in a hyperedge matters. In the picture, the “membranes” are just indicating which nodes are connected to the same hyperedge.)
We can make rules for hypergraphs too:
{{x, y, z}} → {{w, w, y}, {w, x, z}}
RulePlot[ResourceFunction[ "WolframModel"][{{1, 2, 3}} -> {{4, 4, 2}, {4, 1, 3}}]]
And now here’s what happens if we run this rule starting from the simplest possible ternary hypergraph—the ternary self-loop {{0,0,0}}:
ResourceFunction[ "WolframModel"][{{1, 2, 3}} -> {{4, 4, 2}, {4, 1, 3}}, {{0, 0, 0}}, 8]["StatesPlotsList", "MaxImageSize" -> 180]
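Because this rule’s left-hand side is a single hyperedge, every hyperedge gets rewritten once per generation, which makes the evolution easy to sketch. Here is a minimal Python version (a simplified synchronous evolution, with my own function name, not the Wolfram Language API):

```python
def step(state, next_node):
    """Apply {{x,y,z}} -> {{w,w,y},{w,x,z}} to every hyperedge,
    introducing a fresh node w for each rewrite."""
    new_state = []
    for (x, y, z) in state:
        w = next_node          # fresh node for this rewrite
        next_node += 1
        new_state += [(w, w, y), (w, x, z)]
    return new_state, next_node

state, fresh = [(0, 0, 0)], 1  # start from the ternary self-loop
for _ in range(8):
    state, fresh = step(state, fresh)

print(len(state))  # each generation doubles the edge count: 2**8 = 256
```

Each generation doubles the number of hyperedges, so after 8 steps there are 2^8 = 256 of them.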
Alright, so what happens if we just start picking simple rules at random? Here are some of the things they do:
urules24 = Import["https://www.wolframcloud.com/obj/wolframphysics/Data/22-24-2x0-unioned-summary.wxf"]; SeedRandom[6783]; GraphicsGrid[ Partition[ ResourceFunction["WolframModelPlot"][List @@@ EdgeList[#]] & /@ Take[Select[ ParallelMap[ UndirectedGraph[ Rule @@@ ResourceFunction["WolframModel"][#, {{0, 0}, {0, 0}}, 8, "FinalState"], GraphLayout -> "SpringElectricalEmbedding"] &, #Rule & /@ RandomSample[urules24, 150]], EdgeCount[#] > 10 && ConnectedGraphQ[#] &], 60], 10], ImageSize -> Full]
Somehow this looks very zoological (and, yes, these models are definitely relevant for things other than fundamental physics—though probably particularly molecular-scale construction). But basically what we see here is that there are various common forms of behavior, some simple, and some not.
Here are some samples of the kinds of things we see:
GraphicsGrid[ Partition[ ParallelMap[ ResourceFunction["WolframModel"][#[[1]], #[[2]], #[[3]], "FinalStatePlot"] &, {{{{1, 2}, {1, 3}} -> {{1, 2}, {1, 4}, {2, 4}, {4, 3}}, {{0, 0}, {0, 0}}, 12}, {{{1, 2}, {1, 3}} -> {{1, 4}, {1, 4}, {2, 4}, {3, 2}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {1, 3}} -> {{2, 2}, {2, 4}, {1, 4}, {3, 4}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {1, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {1, 4}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {1, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {4, 1}}, {{0, 0}, {0, 0}}, 12}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 1}, {4, 1}, {4, 3}}, {{0, 0}, {0, 0}}, 9}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 4}, {1, 4}, {3, 4}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {1, 3}} -> {{2, 4}, {2, 4}, {2, 1}, {3, 4}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {1, 3}} -> {{4, 1}, {1, 4}, {4, 2}, {4, 3}}, {{0, 0}, {0, 0}}, 12}, {{{1, 2}, {2, 3}} -> {{1, 2}, {2, 1}, {4, 1}, {4, 3}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {2, 3}} -> {{1, 3}, {1, 4}, {3, 4}, {3, 2}}, {{0, 0}, {0, 0}}, 10}, {{{1, 2}, {2, 3}} -> {{2, 3}, {2, 4}, {3, 4}, {1, 2}}, {{0, 0}, {0, 0}}, 9}}], 4], ImageSize -> Full]
And the big question is: if we were to run rules like these long enough, would they end up making something that reproduces our physical universe? Or, put another way, out in this computational universe of simple rules, can we find our physical universe?
A big question, though, is: how would we know? What we’re seeing here are the results of applying rules a few thousand times; in our actual universe they may have been applied 10^{500} times so far, or even more. And it’s not easy to bridge that gap. So we have to work the problem from both sides. First, we have to use the best summary of the operation of our universe that physics over the past few centuries has given us. And second, we have to go as far as we can in figuring out what our rules actually do.
And here there’s potentially a fundamental problem: the phenomenon of computational irreducibility. One of the great achievements of the mathematical sciences, starting about three centuries ago, has been delivering equations and formulas that basically tell you how a system will behave without you having to trace each step in what the system does. But many years ago I realized that in the computational universe of possible rules, this very often isn’t possible. Instead, even if you know the exact rule that a system follows, you may still not be able to work out what the system will do except by essentially just tracing every step it takes.
One might imagine that—once we know the rule for some system—then with all our computers and brainpower we’d always be able to “jump ahead” and work out what the system would do. But actually there’s something I call the Principle of Computational Equivalence, which says that almost any time the behavior of a system isn’t obviously simple, it’s computationally as sophisticated as anything. So we won’t be able to “outcompute” it—and to work out what it does will take an irreducible amount of computational work.
Well, for our models of the universe this is potentially a big problem. Because we won’t be able to get even close to running those models for as long as the universe does. And at the outset it’s not clear that we’ll be able to tell enough from what we can do to see if it matches up with physics.
But the big recent surprise for me is that we seem to be lucking out. We do know that whenever there’s computational irreducibility in a system, there are also an infinite number of pockets of computational reducibility. But it’s completely unclear whether in our case those pockets will line up with things we know from physics. And the surprise is that it seems a bunch of them do.
Let’s look at a particular, simple rule from our infinite collection:
{{x, y, y}, {z, x, u}} → {{y, v, y}, {y, z, v}, {u, v, v}}
RulePlot[ResourceFunction[ "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 5}, {4, 5, 5}}]]
Here’s what it does:
ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ ResourceFunction[ "WolframModel"][{{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 5}, {4, 5, 5}}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]
And after a while this is what happens:
Row[Append[ Riffle[ResourceFunction[ "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, #, "FinalStatePlot"] & /@ {200, 500}, " ... "], " ..."]]
It’s basically making us a very simple “piece of space”. If we keep on going longer and longer it’ll make a finer and finer mesh, to the point where what we have is almost indistinguishable from a piece of a continuous plane.
Here’s a different rule:
{{x, x, y}, {z, u, x}} → {{u, u, z}, {v, u, v}, {v, y, x}}
RulePlot[ResourceFunction[ "WolframModel"][{{x, x, y}, {z, u, x}} -> {{u, u, z}, {v, u, v}, {v, y, x}}]]
ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ ResourceFunction[ "WolframModel"][{{1, 1, 2}, {3, 4, 1}} -> {{4, 4, 3}, {5, 4, 5}, {5, 2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]
ResourceFunction[ "WolframModel"][{{1, 1, 2}, {3, 4, 1}} -> {{4, 4, 3}, {5, 4, 5}, {5, 2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalStatePlot"]
It looks like it’s “trying to make” something 3D. Here’s another rule:
{{x, y, z}, {u, y, v}} → {{w, z, x}, {z, w, u}, {x, y, w}}
RulePlot[ResourceFunction[ "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1, 2, 6}}]]
ResourceFunction["WolframModelPlot"][#, ImageSize -> 50] & /@ ResourceFunction[ "WolframModel"][{{x, y, z}, {u, y, v}} -> {{w, z, x}, {z, w, u}, {x, y, w}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]
ResourceFunction[ "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1, 2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 1000, "FinalStatePlot"]
Isn’t this strange? We have a rule that’s just specifying how to rewrite pieces of an abstract hypergraph, with no notion of geometry, or anything about 3D space. And yet it produces a hypergraph that’s naturally laid out as something that looks like a 3D surface.
Even though the only thing that’s really here is connections between points, we can “guess” where a surface might be, then we can show the result in 3D:
ResourceFunction["GraphReconstructedSurface"][ ResourceFunction[ "WolframModel"][ {{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1, 2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalState"]]
If we keep going, then like the example of the plane, the mesh will get finer and finer, until basically our rule has grown us—point by point, connection by connection—something that’s like a continuous 3D surface of the kind you might study in a calculus class. Of course, in some sense, it’s not “really” that surface: it’s just a hypergraph that represents a bunch of abstract relations—but somehow the pattern of those relations gives it a structure that’s a closer and closer approximation to the surface.
And this is basically how I think space in the universe works. Underneath, it’s a bunch of discrete, abstract relations between abstract points. But at the scale we’re experiencing it, the pattern of relations it has makes it seem like continuous space of the kind we’re used to. It’s a bit like what happens with, say, water. Underneath, it’s a bunch of discrete molecules bouncing around. But to us it seems like a continuous fluid.
Needless to say, people have thought that space might ultimately be discrete ever since antiquity. But in modern physics there was never a way to make it work—and anyway it was much more convenient for it to be continuous, so one could use calculus. But now it’s looking like the idea of space being discrete is actually crucial to getting a fundamental theory of physics.
A very fundamental fact about space as we experience it is that it is three-dimensional. So can our rules reproduce that? Two of the rules we just saw produce what we can easily recognize as two-dimensional surfaces—in one case flat, in the other case arranged in a certain shape. Of course, these are very bland examples of (two-dimensional) space: they are effectively just simple grids. And while this is what makes them easy to recognize, it also means that they’re not actually much like our universe, where there’s in a sense much more going on.
So, OK, take a case like:
ResourceFunction[ "WolframModel"][{{1, 2, 3}, {4, 3, 5}} -> {{3, 5, 2}, {5, 2, 4}, {2, 1, 6}}, {{0, 0, 0}, {0, 0, 0}}, 22, "FinalStatePlot"]
If we were to go on long enough, would this make something like space, and, if so, with how many dimensions? To know the answer, we have to have some robust way to measure dimension. But remember, the pictures we’re drawing are just visualizations; the underlying structure is a bunch of discrete relations defining a hypergraph—with no information about coordinates, or geometry, or even topology. And, by the way, to emphasize that point, here is the same graph—with exactly the same connectivity structure—rendered four different ways:
GridGraph[{10, 10}, GraphLayout -> #, VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "VertexStyle"], EdgeStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "EdgeLineStyle"] ] & /@ {"SpringElectricalEmbedding", "TutteEmbedding", "RadialEmbedding", "DiscreteSpiralEmbedding"} |
But getting back to the question of dimension, recall that the area of a circle is πr^{2}; the volume of a sphere is (4/3)πr^{3}. In general, the “volume” of the d-dimensional analog of a sphere is a constant multiplied by r^{d}. But now think about our hypergraph. Start at some point in the hypergraph. Then follow r hyperedges in all possible ways. You’ve effectively made the analog of a “spherical ball” in the hypergraph. Here are examples for graphs corresponding to 2D and 3D lattices:
MakeBallPicture[g_, rmax_] := Module[{gg = UndirectedGraph[g], cg}, cg = GraphCenter[gg]; Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r]], {r, 0, rmax}]]; Graph[#, ImageSize -> 60, VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "VertexStyle"], EdgeStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "EdgeLineStyle"] ] & /@ MakeBallPicture[GridGraph[{11, 11}], 7]
MakeBallPicture[g_, rmax_] := Module[{gg = UndirectedGraph[g], cg}, cg = GraphCenter[gg]; Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r]], {r, 0, rmax}]]; Graph[#, ImageSize -> 80, VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "VertexStyle"], EdgeStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "EdgeLineStyle"] ] & /@ MakeBallPicture[GridGraph[{7, 7, 7}], 5]
And if you now count the number of points reached by going “graph distance r” (i.e. by following r connections in the graph) you’ll find in these two cases that they indeed grow like r^{2} and r^{3}.
So this gives us a way to measure the effective dimension of our hypergraphs. Just start at a particular point and see how many points you reach by going r steps:
gg = UndirectedGraph[ ResourceFunction["HypergraphToGraph"][ ResourceFunction[ "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {1, 3}}, 11, "FinalState"]]]; With[{cg = GraphCenter[gg]}, Table[HighlightGraph[gg, NeighborhoodGraph[gg, cg, r], ImageSize -> 90], {r, 6}]]
Now to work out effective dimension, we in principle just have to fit the results to r^{d}. It’s a bit complicated, though, because we need to avoid small r (where every detail of the hypergraph is going to matter) and large r (where we’re hitting the edge of the hypergraph)—and we also need to think about how our “space” is refining as the underlying system evolves. But in the end we can generate a series of fits for the effective dimension—and in this case these say that the effective dimension is about 2.7:
HypergraphDimensionEstimateList[hg_] := ResourceFunction["LogDifferences"][ MeanAround /@ Transpose[ Values[ResourceFunction["HypergraphNeighborhoodVolumes"][hg, All, Automatic]]]]; ListLinePlot[ Select[Length[#] > 3 &][ HypergraphDimensionEstimateList /@ Drop[ResourceFunction[ "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {1, 3}}, 16, "StatesList"], 4]], Frame -> True, PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], Hue[0.05, 0.9493847125498949, 0.954757], Hue[ 0.0889039442504032, 0.7504362741954692, 0.873304], Hue[ 0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[ 0.98654716551403, 0.6728487861309527, 0.733028], Hue[ 0.04, 0.68, 0.9400000000000001], Hue[ 0.9945149844324427, 0.9892162267509705, 0.823529], Hue[ 0.9908289627180552, 0.4, 0.9]}]
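The counting behind these estimates is simple enough to sketch outside the Wolfram Language. Here is a minimal Python version (function names are mine, not the Resource Functions used above) that grows graph-distance balls by breadth-first search on a 2D grid graph and reads off the effective dimension from log-differences of V(r):

```python
import math
from collections import deque

def grid_neighbors(p, n):
    """The 4 neighbors of point p = (i, j) inside an n-by-n grid graph."""
    i, j = p
    for q in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if 0 <= q[0] < n and 0 <= q[1] < n:
            yield q

def ball_volumes(start, n, rmax):
    """V(r) = number of nodes within graph distance r of start (BFS)."""
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        p = frontier.popleft()
        if dist[p] == rmax:
            continue
        for q in grid_neighbors(p, n):
            if q not in dist:
                dist[q] = dist[p] + 1
                frontier.append(q)
    return [sum(1 for d in dist.values() if d <= r) for r in range(rmax + 1)]

def dim_estimate(V, r):
    """Local slope of log V(r) against log r -- the effective dimension."""
    return (math.log(V[r]) - math.log(V[r - 1])) / (math.log(r) - math.log(r - 1))

V = ball_volumes((100, 100), 201, 40)  # center of a 201x201 grid
print(round(dim_estimate(V, 40), 2))   # close to 2 for a 2D lattice
```

For a large enough grid, and r well away from both 0 and the boundary, the estimate settles near 2, just as the r^{2} growth above implies.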
If we do the same thing for
ResourceFunction[ "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, 200, "FinalStatePlot"]
it’s limiting to dimension 2, as it should:
CenteredDimensionEstimateList[g_Graph] := ResourceFunction["LogDifferences"][ N[First[Values[ ResourceFunction["GraphNeighborhoodVolumes"][g, GraphCenter[g]]]]]]; Show[ListLinePlot[ Table[CenteredDimensionEstimateList[ UndirectedGraph[ ResourceFunction["HypergraphToGraph"][ ResourceFunction[ "WolframModel"][{{1, 2, 2}, {3, 1, 4}} -> {{2, 5, 2}, {2, 3, 5}, {4, 5, 5}}, {{0, 0, 0}, {0, 0, 0}}, t, "FinalState"]]]], {t, 500, 2500, 500}], Frame -> True, PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], Hue[0.05, 0.9493847125498949, 0.954757], Hue[ 0.0889039442504032, 0.7504362741954692, 0.873304], Hue[ 0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[ 0.98654716551403, 0.6728487861309527, 0.733028], Hue[ 0.04, 0.68, 0.9400000000000001], Hue[ 0.9945149844324427, 0.9892162267509705, 0.823529], Hue[ 0.9908289627180552, 0.4, 0.9]}], Plot[2, {r, 0, 50}, PlotStyle -> Dotted]]
What does the fractional dimension mean? Well, consider fractals, which our rules can easily make:
{{x, y, z}} → {{x, u, w}, {y, v, u}, {z, w, v}}
RulePlot[ResourceFunction[ "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 5}}]]
ResourceFunction["WolframModelPlot"][#, "MaxImageSize" -> 100] & /@ ResourceFunction[ "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 5}}, {{0, 0, 0}}, 6, "StatesList"]
If we measure the dimension here we get 1.58—the usual fractal dimension for a Sierpiński structure:
HypergraphDimensionEstimateList[hg_] := ResourceFunction["LogDifferences"][ MeanAround /@ Transpose[ Values[ResourceFunction["HypergraphNeighborhoodVolumes"][hg, All, Automatic]]]]; Show[ ListLinePlot[ Drop[HypergraphDimensionEstimateList /@ ResourceFunction[ "WolframModel"][{{1, 2, 3}} -> {{1, 4, 6}, {2, 5, 4}, {3, 6, 5}}, {{0, 0, 0}}, 8, "StatesList"], 2], PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], Hue[0.05, 0.9493847125498949, 0.954757], Hue[ 0.0889039442504032, 0.7504362741954692, 0.873304], Hue[ 0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[ 0.98654716551403, 0.6728487861309527, 0.733028], Hue[ 0.04, 0.68, 0.9400000000000001], Hue[ 0.9945149844324427, 0.9892162267509705, 0.823529], Hue[ 0.9908289627180552, 0.4, 0.9]}, Frame -> True, PlotRange -> {0, Automatic}], Plot[Log[2, 3], {r, 0, 150}, PlotStyle -> {Dotted}]]
Our rule above doesn’t create a structure that’s as regular as this. In fact, even though the rule itself is completely deterministic, the structure it makes looks quite random. But what our measurements suggest is that when we keep running the rule it produces something that’s like 2.7-dimensional space.
Of course, 2.7 is not 3, and presumably this particular rule isn’t the one for our particular universe (though it’s not clear what effective dimension it’d have if we ran it 10^{100} steps). But the process of measuring dimension shows an example of how we can start making “physics-connectable” statements about the behavior of our rules.
By the way, we’ve been talking about “making space” with our models. But actually, we’re not just trying to make space; we’re trying to make everything in the universe. In standard current physics, there’s space—described mathematically as a manifold—serving as a kind of backdrop, and then there’s everything that’s in space, all the matter and particles and planets and so on.
But in our models there’s in a sense nothing but space—and in a sense everything in the universe must be “made of space”. Or, put another way, it’s the exact same hypergraph that’s giving us the structure of space, and everything that exists in space.
So what this means is that, for example, a particle like an electron or a photon must correspond to some local feature of the hypergraph, a bit like in this toy example:
Graph[EdgeAdd[ EdgeDelete[ NeighborhoodGraph[ IndexGraph@ResourceFunction["HexagonalGridGraph"][{6, 5}], {42, 48, 54, 53, 47, 41}, 4], {30 <-> 29, 42 <-> 41}], {30 <-> 41, 42 <-> 29}], VertexSize -> {Small, Alternatives @@ {30, 36, 42, 41, 35, 29} -> Large}, EdgeStyle -> {ResourceFunction["WolframPhysicsProjectStyleData"][ "SpatialGraph", "EdgeLineStyle"], Alternatives @@ {30 \[UndirectedEdge] 24, 24 \[UndirectedEdge] 18, 18 \[UndirectedEdge] 17, 17 \[UndirectedEdge] 23, 23 \[UndirectedEdge] 29, 29 \[UndirectedEdge] 35, 35 \[UndirectedEdge] 34, 34 \[UndirectedEdge] 40, 40 \[UndirectedEdge] 46, 46 \[UndirectedEdge] 52, 52 \[UndirectedEdge] 58, 58 \[UndirectedEdge] 59, 59 \[UndirectedEdge] 65, 65 \[UndirectedEdge] 66, 66 \[UndirectedEdge] 60, 60 \[UndirectedEdge] 61, 61 \[UndirectedEdge] 55, 55 \[UndirectedEdge] 49, 49 \[UndirectedEdge] 54, 49 \[UndirectedEdge] 43, 43 \[UndirectedEdge] 37, 37 \[UndirectedEdge] 36, 36 \[UndirectedEdge] 30, 30 \[UndirectedEdge] 41, 42 \[UndirectedEdge] 29, 36 \[UndirectedEdge] 42, 35 \[UndirectedEdge] 41, 41 \[UndirectedEdge] 47, 47 \[UndirectedEdge] 53, 53 \[UndirectedEdge] 54, 54 \[UndirectedEdge] 48, 48 \[UndirectedEdge] 42} -> Directive[AbsoluteThickness[2.5], Darker[Red, .2]]}, VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"]["SpatialGraph", "VertexStyle"]]
To give a sense of scale, though, I have an estimate that says that 10^{200} times more “activity” in the hypergraph that represents our universe is going into “maintaining the structure of space” than is going into maintaining all the matter we know exists in the universe.
Here are a few structures that simple examples of our rules make:
GraphicsRow[{WolframModel[ {{1, 2, 2}, {1, 3, 4}} -> {{4, 5, 5}, {5, 3, 2}, {1, 2, 5}}, {{0, 0, 0}, {0, 0, 0}}, 1000, "FinalStatePlot"], WolframModel[{{1, 1, 2}, {1, 3, 4}} -> {{4, 4, 5}, {5, 4, 2}, {3, 2, 5}}, {{0, 0, 0}, {0, 0, 0}}, 1000, "FinalStatePlot"], WolframModel[{{1, 1, 2}, {3, 4, 1}} -> {{3, 3, 5}, {2, 5, 1}, {2, 6, 5}}, {{0, 0, 0}, {0, 0, 0}}, 2000, "FinalStatePlot"]}, ImageSize -> Full]
But while all of these look like surfaces, they’re all obviously different. And one way to characterize them is by their local curvature. Well, it turns out that in our models, curvature is a concept closely related to dimension—and this fact will actually be critical in understanding, for example, how gravity arises.
But for now, let’s talk about how one would measure curvature on a hypergraph. Normally the area of a circle is πr^{2}. But let’s imagine that we’ve drawn a circle on the surface of a sphere, and now we’re measuring the area on the sphere that’s inside the circle:
cappedSphere[angle_] := Module[{u, v}, With[{spherePoint = {Cos[u] Sin[v], Sin[u] Sin[v], Cos[v]}}, Graphics3D[{First@ ParametricPlot3D[spherePoint, {v, #1, #2}, {u, 0, 2 \[Pi]}, Mesh -> None, ##3] & @@@ {{angle, \[Pi], PlotStyle -> Lighter[Yellow, .5]}, {0, angle, PlotStyle -> Lighter[Red, .3]}}, First@ParametricPlot3D[ spherePoint /. v -> angle, {u, 0, 2 \[Pi]}, PlotStyle -> Darker@Red]}, Boxed -> False, SphericalRegion -> False, Method -> {"ShrinkWrap" -> True}]]]; Show[GraphicsRow[Riffle[cappedSphere /@ {0.3, Pi/6, .8}, Spacer[30]]], ImageSize -> 250]
This area is no longer πr^{2}. Instead it’s 2πa^{2}(1 − cos(r/a)), where a is the radius of the sphere. In other words, as the radius of the circle gets bigger, the effect of being on the sphere is ever more important. (On the surface of the Earth, imagine a circle drawn around the North Pole; once it gets to the equator, it can never get any bigger.)
If we generalize to d dimensions, it turns out the formula for the growth rate of the volume is r^{d}(1 − (r^{2} R)/(6(d+2)) + …), where R is a mathematical object known as the Ricci scalar curvature.
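As a sanity check on the d = 2 case: for a sphere of radius a, the area inside a circle of geodesic radius r is 2πa^{2}(1 − cos(r/a)), and expanding for small r gives

2πa^{2}(1 − cos(r/a)) = πr^{2}(1 − r^{2}/(12a^{2}) + …)

Since the Ricci scalar curvature of a sphere of radius a is R = 2/a^{2}, this is exactly the general correction factor 1 − (r^{2} R)/(6(d+2)) evaluated at d = 2.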
So what this all means is that if we look at the growth rates of spherical balls in our hypergraphs, we can expect two contributions: a leading one of order r^{d} that corresponds to effective dimension, and a “correction” of order r^{2} that represents curvature.
Here’s an example. Instead of giving a flat estimate of dimension (here equal to 2), we have something that dips down, reflecting the positive (“sphere-like”) curvature of the surface:
res = CloudGet["https://wolfr.am/L1ylk12R"]; GraphicsRow[{ResourceFunction["WolframModelPlot"][ ResourceFunction[ "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1, 2, 6}}, {{0, 0, 0}, {0, 0, 0}}, 800, "FinalState"]], ListLinePlot[res, Frame -> True, PlotStyle -> {Hue[0.9849884156577183, 0.844661839156126, 0.63801], Hue[0.05, 0.9493847125498949, 0.954757], Hue[ 0.0889039442504032, 0.7504362741954692, 0.873304], Hue[ 0.06, 1., 0.8], Hue[0.12, 1., 0.9], Hue[0.08, 1., 1.], Hue[ 0.98654716551403, 0.6728487861309527, 0.733028], Hue[ 0.04, 0.68, 0.9400000000000001], Hue[ 0.9945149844324427, 0.9892162267509705, 0.823529], Hue[ 0.9908289627180552, 0.4, 0.9]}]}]
What is the significance of curvature? One thing is that it has implications for geodesics. A geodesic is the shortest distance between two points. In ordinary flat space, geodesics are just lines. But when there’s curvature, the geodesics are curved:
(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/Section-04/Geodesics-01.wl*) CloudGet["https://wolfr.am/L1PH6Rne"]; hyperboloidGeodesics = Table[ Part[ NDSolve[{Sinh[ 2 u[t]] ((2 Derivative[1][u][t]^2 - Derivative[1][v][t]^2)/( 2 Cosh[2 u[t]])) + Derivative[2][u][t] == 0, ((2 Tanh[ u[t]]) Derivative[1][u][t]) Derivative[1][v][t] + Derivative[2][v][ t] == 0, u[0] == -0.9, v[0] == v0, u[1] == 0.9, v[1] == v0}, { u[t], v[t]}, {t, 0, 1}, MaxSteps -> Infinity], 1], {v0, Range[-0.1, 0.1, 0.025]}]; {SphereGeodesics[Range[-.1, .1, .025]], PlaneGeodesics[Range[-.1, .1, .025]], Show[ParametricPlot3D[{Sinh[u], Cosh[u] Sin[v], Cos[v] Cosh[u]}, {u, -1, 1}, {v, -\[Pi]/3, \[Pi]/3}, Mesh -> False, Boxed -> False, Axes -> False, PlotStyle -> color], ParametricPlot3D[{Sinh[u[t]], Cosh[u[t]] Sin[v[t]], Cos[v[t]] Cosh[u[t]]} /. #, {t, 0, 1}, PlotStyle -> Red] & /@ hyperboloidGeodesics, ViewAngle -> 0.3391233203265557`, ViewCenter -> {{0.5`, 0.5`, 0.5`}, {0.5265689095305934`, 0.5477310383268459`}}, ViewPoint -> {1.7628482856617167`, 0.21653966523483362`, 2.8801868854502355`}, ViewVertical -> {-0.1654573174671554`, 0.1564093539158781`, 0.9737350718261054`}]}
In the case of positive curvature, bundles of geodesics converge; for negative curvature they diverge. But, OK, even though geodesics were originally defined for continuous space (actually, as the name suggests, for paths on the surface of the Earth), one can also have them in graphs (and hypergraphs). And it’s the same story: the geodesic is the shortest path between two points in the graph (or hypergraph).
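On a graph, finding a geodesic is just a shortest-path search. Here is a minimal sketch in Python (the Wolfram Language code below uses FindShortestPath and a dedicated Geodesics helper; this shows only the underlying idea):

```python
from collections import deque

def geodesic(edges, a, b):
    """Shortest path from a to b in an undirected graph,
    found by breadth-first search with parent pointers."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    parent = {a: None}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:                 # reached the target: walk back to a
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in nbrs.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None                    # b is not reachable from a

# A 4-cycle: from node 1, node 3 is two hops away either way around.
print(geodesic([(1, 2), (2, 3), (3, 4), (1, 4)], 1, 3))  # [1, 2, 3]
```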
Here are geodesics on the “positive-curvature surface” created by one of our rules:
findShortestPath[edges_, endpoints : {{_, _} ...}] := FindShortestPath[ Catenate[Partition[#, 2, 1, 1] & /@ edges], #, #2] & @@@ endpoints; pathEdges[edges_, path_] := Select[Count[Alternatives @@ path]@# >= 2 &]@edges; plotGeodesic[edges_, endpoints : {{_, _} ...}, o : OptionsPattern[]] := With[{vertexPaths = findShortestPath[edges, endpoints]}, ResourceFunction["WolframModelPlot"][edges, o, GraphHighlight -> Catenate[vertexPaths], EdgeStyle -> <| Alternatives @@ Catenate[pathEdges[edges, #] & /@ vertexPaths] -> Directive[AbsoluteThickness[4], Red]|>]]; plotGeodesic[edges_, endpoints : {__ : Except@List}, o : OptionsPattern[]] := plotGeodesic[edges, {endpoints}, o]; plotGeodesic[ ResourceFunction[ "WolframModel"][{{1, 2, 3}, {4, 2, 5}} -> {{6, 3, 1}, {3, 6, 4}, {1, 2, 6}}, Automatic, 1000, "FinalState"], {{123, 721}, {24, 552}, {55, 671}}, VertexSize -> 0.12]
And here they are for a more complicated structure:
(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/Section-04/Geodesics-01.wl*) CloudGet["https://wolfr.am/L1PH6Rne"];(*Geodesics*) gtest = UndirectedGraph[ Rule @@@ ResourceFunction[ "WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {1, 3}}, 10, "FinalState"], Sequence[ VertexStyle -> ResourceFunction["WolframPhysicsProjectStyleData"][ "SpatialGraph", "VertexStyle"], EdgeStyle -> ResourceFunction["WolframPhysicsProjectStyleData"][ "SpatialGraph", "EdgeLineStyle"]] ]; Geodesics[gtest, #] & /@ {{{79, 207}}, {{143, 258}}}
Why are geodesics important? One reason is that in Einstein’s general relativity they’re the paths that light (or objects in “free fall”) follows in space. And in that theory gravity is associated with curvature in space. So when something is deflected going around the Sun, that happens because space around the Sun is curved, so the geodesic the object follows is also curved.
General relativity’s description of curvature in space turns out to be based entirely on the Ricci scalar curvature R that we encountered above (as well as the slightly more sophisticated Ricci tensor). So if we want to find out whether our models are reproducing Einstein’s equations for gravity, we basically have to find out whether the Ricci curvatures that arise from our hypergraphs are the same as the theory implies.
There’s quite a bit of mathematical sophistication involved (for example, we have to consider curvature in space+time, not just space), but the bottom line is that, yes, in various limits, and subject to various assumptions, our models do indeed reproduce Einstein’s equations. (At first, we’re just reproducing the vacuum Einstein equations, appropriate when there’s no matter involved; when we discuss matter, we’ll see that we actually get the full Einstein equations.)
It’s a big deal to reproduce Einstein’s equations. Normally in physics, Einstein’s equations are what you start from (or sometimes they arise as a consistency condition for a theory): here they’re what comes out as an emergent feature of the model.
It’s worth saying a little about how the derivation works. It’s actually somewhat analogous to the derivation of the equations of fluid flow from the limit of the underlying dynamics of lots of discrete molecules. But in this case, it’s the structure of space rather than the velocity of a fluid that we’re computing. It involves some of the same kinds of mathematical approximations and assumptions, though. One has to assume, for example, that there’s enough effective randomness generated in the system that statistical averages work. There is also a whole host of subtle mathematical limits to take. Distances have to be large compared to individual hypergraph connections, but small compared to the whole size of the hypergraph, etc.
It’s pretty common for physicists to “hack through” the mathematical niceties. That’s actually happened for nearly a century in the case of deriving fluid equations from molecular dynamics. And we’re definitely guilty of the same thing here. Which in a sense is another way of saying that there’s lots of nice mathematics to do in actually making the derivation rigorous, and understanding exactly when it’ll apply, and so on.
By the way, when it comes to mathematics, even the setup that we have is interesting. Calculus has been built to work in ordinary continuous spaces (manifolds that locally approximate Euclidean space). But what we have here is something different: in the limit of an infinitely large hypergraph, it’s like a continuous space, but ordinary calculus doesn’t work on it (not least because it isn’t necessarily integer-dimensional). So to really talk about it well, we have to invent something that’s kind of a generalization of calculus, that’s for example capable of dealing with curvature in fractional-dimensional space. (Probably the closest current mathematics to this is what’s been coming out of the very active field of geometric group theory.)
It’s worth noting, by the way, that there’s a lot of subtlety in the precise tradeoff between changing the dimension of space, and having curvature in it. And while we think our universe is three-dimensional, it’s quite possible according to our models that there are at least local deviations—and most likely there were actually large deviations in the early universe.
In our models, space is defined by the large-scale structure of the hypergraph that represents our collection of abstract relations. But what then is time?
For the past century or so, it’s been pretty universally assumed in fundamental physics that time is in a sense “just like space”—and that one should for example lump space and time together and talk about the “spacetime continuum”. And certainly the theory of relativity points in this direction. But if there’s been one “wrong turn” in the history of physics in the past century, I think it’s the assumption that space and time are the same kind of thing. And in our models they’re not—even though, as we’ll see, relativity comes out just fine.
So what then is time? In effect it’s much as we experience it: the inexorable process of things happening and leading to other things. But in our models it’s something much more precise: it’s the progressive application of rules, that continually modify the abstract structure that defines the contents of the universe.
The version of time in our models is in a sense very computational. As time progresses we are in effect seeing the results of more and more steps in a computation. And indeed the phenomenon of computational irreducibility implies that there is something definite and irreducible “achieved” by this process. (And, for example, this irreducibility is what I believe is responsible for the “encrypting” of initial conditions that is associated with the law of entropy increase, and the thermodynamic arrow of time.) Needless to say, of course, our modern computational paradigm did not exist a century ago when “spacetime” was introduced, and perhaps if it had, the history of physics might have been very different.
But, OK, so in our models time is just the progressive application of rules. But there is a subtlety in exactly how this works that might at first seem like a detail, but that actually turns out to be huge, and in fact turns out to be the key to both relativity and quantum mechanics.
At the beginning of this piece, I talked about the rule
{{x, y}, {x, z}} → {{x, z}, {x, w}, {y, w}, {z, w}}
RulePlot[ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.55]
and showed the “first few steps” in applying it
ResourceFunction["WolframModelPlot"] /@ ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 4, "StatesList"]
But how exactly did the rule get applied? What is “inside” these steps? The rule defines how to take two connections in the hypergraph (which in this case is actually just a graph) and transform them into four new connections, creating a new element in the process. So each “step” that we showed before actually consists of several individual “updating events” (where here newly added connections are highlighted, and ones that are about to be removed are dashed):
With[{eo = ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}, {{1, 2}, {2, 3}, {3, 4}, {2, 4}}, 4]}, TakeList[eo["EventsStatesPlotsList", ImageSize -> 130], eo["GenerationEventsCountList", "IncludeBoundaryEvents" -> "Initial"]]]
But now, here is the crucial point: this is not the only sequence of updating events consistent with the rule. The rule just says to find two adjacent connections, and if there are several possible choices, it says nothing about which one. And a crucial idea in our model is in a sense just to do all of them.
We can represent this with a graph that shows all possible paths:
CloudGet["https://wolfr.am/LmHho8Tr"]; (*newgraph*)
newgraph[Graph[ResourceFunction["MultiwaySystem"]["WolframModel" -> {{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}}, {{{1, 2}, {2, 3}, {3, 4}, {2, 4}}}, 3, "StatesGraph", VertexSize -> 3, PerformanceGoal -> "Quality"], AspectRatio -> 1/2], {3, 0.7}]
For the very first update, there are two possibilities. Then for each of the results of these, there are four additional possibilities. But at the next update, something important happens: two of the branches merge. In other words, even though we have done a different sequence of updates, the outcome is the same.
Things rapidly get complicated. Here is the graph after one more update, now no longer trying to show a progression down the page:
Graph[ResourceFunction["MultiwaySystem"]["WolframModel" -> {{{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}}}, {{{1, 2}, {2, 3}, {3, 4}, {2, 4}}}, 4, "StatesGraph", VertexSize -> 3, PerformanceGoal -> "Quality"]]
So how does this relate to time? What it says is that in the basic statement of the model there is not just one path of time; there are many paths, and many “histories”. But the model—and the rule that is used—determines all of them. And we have seen a hint of something else: that even if we might think we are following an “independent” path of history, it may actually merge with another path.
It will take some more discussion to explain how this all works. But for now let me say that what will emerge is that time is about causal relationships between things, and that in fact, even when the paths of history that are followed are different, these causal relationships can end up being the same—and that in effect, to an observer embedded in the system, there is still just a single thread of time.
In the end it’s wonderfully elegant. But to get to the point where we can understand the elegant bigger picture we need to go through some detailed things. (It isn’t terribly surprising that a fundamental theory of physics—inevitably built on very abstract ideas—is somewhat complicated to explain, but so it goes.)
To keep things tolerably simple, I’m not going to talk directly about rules that operate on hypergraphs. Instead I’m going to talk about rules that operate on strings of characters. (To clarify: these are not the strings of string theory—although in a bizarre twist of “pun-becomes-science” I suspect that the continuum limit of the operations I discuss on character strings is actually related to string theory in the modern physics sense.)
OK, so let’s say we have the rule:
{A → BBB, BB → A}
This rule says that anywhere we see an A, we can replace it with BBB, and anywhere we see BB we can replace it with A. So now we can generate what we call the multiway system for this rule, and draw a “multiway graph” that shows everything that can happen:
ResourceFunction["MultiwaySystem"][{"A" -> "BBB", "BB" -> "A"}, {"A"}, 8, "StatesGraph"]
At the first step, the only possibility is to use A→BBB to replace the A with BBB. But then there are two possibilities: replace either the first BB or the second BB—and these choices give different results. On the next step, though, all that can be done is to replace the A—in both cases giving BBBB.
So in other words, even though we in a sense had two paths of history that diverged in the multiway system, it took only one step for them to converge again. And if you trace through the picture above you’ll find out that’s what always happens with this rule: every pair of branches that is produced always merges, in this case after just one more step.
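The branching and merging here is easy to reproduce outside the Wolfram Language. Here is a minimal Python sketch (my own illustrative code, not part of the project's tooling) that enumerates the successor strings of the multiway system level by level:

```python
def successors(s, rules):
    """All strings reachable from s by applying one rule at one position."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

rules = [("A", "BBB"), ("BB", "A")]
level = {"A"}
for step in range(3):
    # all states reachable in one more update, from every state on this level
    level = set().union(*(successors(s, rules) for s in level))
```

After one step the only state is BBB; after two there are the two branches AB and BA; after three, both branches have merged back into the single state BBBB, just as in the multiway graph above.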
This kind of balance between branching and merging is a phenomenon I call “causal invariance”. And while it might seem like a detail here, it actually turns out that it’s at the core of why relativity works, why there’s a meaningful objective reality in quantum mechanics, and a host of other core features of fundamental physics.
But let’s explain why I call the property causal invariance. The picture above just shows what “state” (i.e. what string) leads to what other one. But at the risk of making the picture more complicated (and note that this is incredibly simple compared to the full hypergraph case), we can annotate the multiway graph by including the updating events that lead to each transition between states:
LayeredGraphPlot[ResourceFunction["MultiwaySystem"][{"A" -> "BBB", "BB" -> "A"}, {"A"}, 8, "EvolutionEventsGraph"], AspectRatio -> 1]
But now we can ask the question: what are the causal relationships between these events? In other words, what event needs to happen before some other event can happen? Or, said another way, what events must have happened in order to create the input that’s needed for some other event?
Let us go even further, and annotate the graph above by showing all the causal dependencies between events:
LayeredGraphPlot[ResourceFunction["MultiwaySystem"][{"A" -> "BBB", "BB" -> "A"}, {"A"}, 7, "EvolutionCausalGraph"], AspectRatio -> 1]
The orange lines in effect show which event has to happen before which other event—or what all the causal relationships in the multiway system are. And, yes, it’s complicated. But note that this picture shows the whole multiway system—with all possible paths of history—as well as the whole network of causal relationships within and between these paths.
But here’s the crucial thing about causal invariance: it implies that actually the graph of causal relationships is the same regardless of which path of history is followed. And that’s why I originally called this property “causal invariance”—because it says that with a rule like this, the causal properties are invariant with respect to different choices of the sequence in which updating is done.
And if one traced through the picture above (and went quite a few more steps), one would find that for every path of history, the causal graph representing causal relationships between events would always be:
ResourceFunction["SubstitutionSystemCausalGraph"][{"A" -> "BBB", "BB" -> "A"}, "A", 10] // LayeredGraphPlot
or, drawn differently,
ResourceFunction["SubstitutionSystemCausalGraph"][{"A" -> "BBB", "BB" -> "A"}, "A", 12]
To understand more about causal invariance, it’s useful to look at an even simpler example: the case of the rule BA→AB. This rule says that any time there’s a B followed by an A in a string, swap these characters around. In other words, this is a rule that tries to sort a string into alphabetical order, two characters at a time.
Let’s say we start with BBBAAA. Then here’s the multiway graph that shows all the things that can happen according to the rule:
Graph[ResourceFunction["MultiwaySystem"][{"BA" -> "AB"}, "BBBAAA", 12, "EvolutionEventsGraph"], AspectRatio -> 1.5] // LayeredGraphPlot
There are lots of different paths that can be followed, depending on which BA in the string the rule is applied to at each step. But the important thing we see is that at the end all the paths merge, and we get a single final result: the sorted string AAABBB. And the fact that we get this single final result is a consequence of the causal invariance of the rule. In a case like this where there’s a final result (as opposed to just evolving forever), causal invariance basically says: it doesn’t matter what order you do all the updates in; the result you’ll get will always be the same.
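One can check this convergence numerically. The following Python sketch (illustrative only; it stands in for the Wolfram Language code above) applies the rule at randomly chosen positions and records where each run ends up:

```python
import random

def sort_run(s, rng):
    """Apply BA -> AB at a randomly chosen position until no BA remains."""
    steps = 0
    while "BA" in s:
        positions = [i for i in range(len(s) - 1) if s[i:i + 2] == "BA"]
        i = rng.choice(positions)
        s = s[:i] + "AB" + s[i + 2:]
        steps += 1
    return s, steps

# run the sorting process with 20 different random update orders
results = {sort_run("BBBAAA", random.Random(seed)) for seed in range(20)}
```

Every run ends at the sorted string AAABBB. And since each swap removes exactly one B-before-A inversion, and BBBAAA contains nine of them, every run also takes exactly nine events, whatever order is chosen.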
I’ve introduced causal invariance in the context of trying to find a model of fundamental physics—and I’ve said that it’s going to be critical to both relativity and quantum mechanics. But actually what amounts to causal invariance has been seen before in various different guises in mathematics, mathematical logic and computer science. (Its most common name is “confluence”, though there are some technical differences between this and what I call causal invariance.)
Think about expanding out an algebraic expression, like (x + (1 + x)²)(x + 2)². You could expand one of the powers first, then multiply things out. Or you could multiply the terms first. It doesn't matter what order you do the steps in; you'll always get the same canonical form (which in this case Mathematica tells me is 4 + 16x + 17x² + 7x³ + x⁴). And this independence of orders is essentially causal invariance.
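The same order-independence can be seen with plain coefficient arithmetic. In this Python sketch (with a made-up helper pmul, not anything from Mathematica), multiplying by (x + 2) twice or expanding (x + 2)² first gives identical results:

```python
def pmul(a, b):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = [1, 3, 1]   # x + (1 + x)^2 expanded: 1 + 3x + x^2
q = [2, 1]      # x + 2
r1 = pmul(pmul(p, q), q)   # multiply by (x + 2), then by (x + 2) again
r2 = pmul(p, pmul(q, q))   # expand (x + 2)^2 first, then multiply
```

Both orders yield the coefficient list [4, 16, 17, 7, 1], i.e. the canonical form 4 + 16x + 17x² + 7x³ + x⁴.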
Here’s one more example. Imagine you’ve got some recursive definition, say f[n_]:=f[n-1]+f[n-2] (with f[0]=f[1]=1). Now evaluate f[10]. First you get f[9]+f[8]. But what do you do next? Do you evaluate f[9], or f[8]? And then what? In the end, it doesn’t matter; you’ll always get 55. And this is another example of causal invariance.
When one thinks about parallel or asynchronous algorithms, it matters a great deal whether one has causal invariance. Because it means one can do things in any order—say, depth-first, breadth-first, or whatever—and one will always get the same answer. And that’s what’s happening in our little sorting algorithm above.
OK, but now let’s come back to causal relationships. Here’s the multiway system for the sorting process annotated with all causal relationships for all paths:
Magnify[LayeredGraphPlot[ResourceFunction["MultiwaySystem"][{"BA" -> "AB"}, "BBBAAA", 12, "EvolutionCausalGraph"], AspectRatio -> 1.5], .6]
And, yes, it’s a mess. But because there’s causal invariance, we know something very important: this is basically just a lot of copies of the same causal graph—a simple grid:
centeredRange[n_] := # - Mean@# &@Range@n;
centeredLayer[n_] := {#, n} & /@ centeredRange@n;
diamondLayerSizes[layers_?OddQ] := Join[#, Reverse@Most@#] &@Range[(layers + 1)/2];
diamondCoordinates[layers_?OddQ] := Catenate@MapIndexed[Thread@{centeredRange@#, (layers - First@#2)/2} &, diamondLayerSizes[layers]];
diamondGraphLayersCount[graph_] := 2 Sqrt[VertexCount@graph] - 1;
With[{graph = ResourceFunction["SubstitutionSystemCausalGraph"][{"BA" -> "AB"}, "BBBBAAAA", 12]}, Graph[graph, VertexCoordinates -> diamondCoordinates@diamondGraphLayersCount@graph, VertexSize -> .2]]
(By the way—as the picture suggests—the cross-connections between these copies aren’t trivial, and later on we’ll see they’re associated with deep relations between relativity and quantum mechanics, that probably manifest themselves in the physics of black holes. But we’ll get to that later…)
OK, so every different way of applying the sorting rule is supposed to give the same causal graph. So here’s one example of how we might apply the rule starting with a particular initial string:
evo = (SeedRandom[2424]; ResourceFunction["SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}]);
ResourceFunction["SubstitutionSystemCausalPlot"][evo, EventLabels -> False, CellLabels -> True, CausalGraph -> False]
But now let’s show the graph of causal connections. And we see it’s just a grid:
evo = (SeedRandom[2424]; ResourceFunction["SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}]);
ResourceFunction["SubstitutionSystemCausalPlot"][evo, EventLabels -> False, CellLabels -> False, CausalGraph -> True]
Here are three other possible sequences of updates:
SeedRandom[242444];
GraphicsRow[Table[ResourceFunction["SubstitutionSystemCausalPlot"][ResourceFunction["SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, "BBAAAABAABBABBBBBAAA", 15, {"Random", 4}], EventLabels -> False, CellLabels -> False, CausalGraph -> True], 3], ImageSize -> Full]
But now we see causal invariance in action: even though different updates occur at different times, the graph of causal relationships between updating events is always the same. And having seen this—in the context of a very simple example—we’re ready to talk about special relativity.
It’s a typical first instinct in thinking about doing science: you imagine doing an experiment on a system, but you—as the “observer”—are outside the system. Of course if you’re thinking about modeling the whole universe and everything in it, this isn’t ultimately a reasonable way to think about things. Because the “observer” is inevitably part of the universe, and so has to be modeled just like everything else.
In our models what this means is that the “mind of the observer”, just like everything else in the universe, has to get updated through a series of updating events. There’s no absolute way for the observer to “know what’s going on in the universe”; all they ever experience is a series of updating events, that may happen to be affected by updating events occurring elsewhere in the universe. Or, said differently, all the observer can ever observe is the network of causal relationships between events—or the causal graph that we’ve been talking about.
So as a toy model let’s look at our BA→AB rule for strings. We might imagine that the string is laid out in space. But to our observer the only thing they know is the causal graph that represents causal relationships between events. And for the BA→AB system here’s one way we can draw that:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[10, {0, 0}, {0.0, 0.0}, lorentz[0]]
But now let’s think about how observers might “experience” this causal graph. Underneath, an observer is getting updated by some sequence of updating events. But even though that’s “really what’s going on”, to make sense of it, we can imagine our observers setting up internal “mental” models for what they see. And a pretty natural thing for observers like us to do is just to say “one set of things happens all across the universe, then another, and so on”. And we can translate this into saying that we imagine a series of “moments” in time, where things happen “simultaneously” across the universe—at least with some convention for defining what we mean by simultaneously. (And, yes, this part of what we’re doing is basically following what Einstein did when he originally proposed special relativity.)
Here’s a possible way of doing it:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[10, {1, 0}, {0.0, 0.0}, lorentz[0]]
One can describe this as a “foliation” of the causal graph. We’re dividing the causal graph into leaves or slices. And each slice our observers can consider to be a “successive moment in time”.
It’s important to note that there are some constraints on the foliation we can pick. The causal graph defines what event has to happen before what. And if our observers are going to have a chance of making sense of the world, it had better be the case that their notion of the progress of time aligns with what the causal graph says. So for example this foliation wouldn’t work—because basically it says that the time we assign to events is going to disagree with the order in which the causal graph says they have to happen:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[6, {.2, 0}, {5, 0.0}, lorentz[0]]
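One can state the constraint concretely: a foliation assigns each event a slice number, and it is valid only if every causal edge runs from an earlier slice to a strictly later one. Here is a minimal Python sketch (using a made-up four-event causal graph, not one generated by the models in this piece):

```python
def causal_layers(dag):
    """Assign each event the length of the longest causal chain leading to it."""
    parents = {v: [] for v in dag}
    for u, vs in dag.items():
        for v in vs:
            parents[v].append(u)
    layer = {}
    def depth(v):
        if v not in layer:
            layer[v] = 1 + max((depth(u) for u in parents[v]), default=-1)
        return layer[v]
    for v in dag:
        depth(v)
    return layer

# edges point from cause to effect: e1 enables e2 and e3, which both enable e4
causal = {"e1": ["e2", "e3"], "e2": ["e4"], "e3": ["e4"], "e4": []}
layer = causal_layers(causal)
# a valid foliation: every causal edge crosses into a strictly later slice
valid = all(layer[u] < layer[v] for u in causal for v in causal[u])
```

The slices here are {e1}, then {e2, e3}, then {e4}; any assignment that put e4 at or before e2 or e3 would fail the check, which is exactly why a foliation tipped too far is disallowed.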
OK, so given the foliation above, what actual order of updating events does it imply? It basically just says: as many events as possible happen at the same time (i.e. in the same slice of the foliation), as in this picture:
(* https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/Section-08/BoostedEvolution.wl *)
CloudGet["https://wolfr.am/LbaDFVSn"]; (*boostedEvolution*)
ResourceFunction["SubstitutionSystemCausalPlot"][boostedEvolution[ResourceFunction["SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, StringRepeat["BA", 10], 10], 0], EventLabels -> False, CellLabels -> True, CausalGraph -> False]
OK, now let’s connect this to physics. The foliation we had above is relevant to observers who are somehow “stationary with respect to the universe” (the “cosmological rest frame”). One can imagine that as time progresses, the events a particular observer experiences are ones in a column going vertically down the page:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[5, {1, 0.01}, {0.0, 0.0}, {1.5, 0}, {Red, Directive[Dotted, Thick, Red]}, lorentz[0]]
But now let’s think about an observer who is uniformly moving in space. They’ll experience a different sequence of events, say:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[5, {1, 0.01}, {0.0, 0.3}, {0.6, 0}, {Red, Directive[Dotted, Thick, Red]}, lorentz[0]]
And that means that the foliation they’ll naturally construct will be different. From the “outside” we can draw it on the causal graph like this:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[10, {1, 0.01}, {0.3, 0.3}, {0, 0}, {Red, Directive[Dotted, Thick, Red]}, lorentz[0.]]
But to the observer each slice just represents a successive moment of time. And they don’t have any way to know how the causal graph was drawn. So they’ll construct their own version, where the slices are horizontal:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[10, {1, 0.01}, {0.3, 0.3}, {0, 0}, {Red, Directive[Dotted, Thick, Red]}, lorentz[0.3]]
But now there’s a purely geometrical fact: to make this rearrangement, while preserving the basic structure (and here, angles) of the causal graph, each moment of time has to sample fewer events in the causal graph, by a factor of where β is the angle that represents the velocity of the observer.
If you know about special relativity, you’ll recognize a lot of this. What we’ve been calling foliations correspond directly to relativity’s “reference frames”. And our foliations that represent motion are the standard inertial reference frames of special relativity.
But here’s the special thing that’s going on here: we can interpret all this discussion of foliations and reference frames in terms of the actual rules and evolution of our underlying system. So here now is the evolution of our string-sorting system in the “boosted reference frame” corresponding to an observer going at a certain speed:
(* https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/Section-08/BoostedEvolution.wl *)
CloudGet["https://wolfr.am/LbaDFVSn"]; (*boostedEvolution*)
ResourceFunction["SubstitutionSystemCausalPlot"][boostedEvolution[ResourceFunction["SubstitutionSystemCausalEvolution"][{"BA" -> "AB"}, StringRepeat["BA", 10], 10], 0.3], EventLabels -> False, CellLabels -> True, CausalGraph -> False]
And here’s the crucial point: because of causal invariance it doesn’t matter that we’re in a different reference frame—the causal graph for the system (and the way it eventually sorts the string) is exactly the same.
In special relativity, the key idea is that the “laws of physics” work the same in all inertial reference frames. But why should that be true? Well, in our systems, there’s an answer: it’s a consequence of causal invariance in the underlying rules. In other words, from the property of causal invariance, we’re able to derive relativity.
Normally in physics one puts in relativity by the way one sets up the mathematical structure of spacetime. But in our models we don’t start from anything like this, and in fact space and time are not even the same kind of thing at all. But what we can now see is that—because of causal invariance—relativity emerges in our models, with all the relationships between space and time that this implies.
So, for example, if we look at the picture of our string-sorting system above, we can see relativistic time dilation. In effect, because of the foliation we picked, time operates slower. Or, said another way, in the effort to sample space faster, our observer experiences slower updating of the system in time.
The speed of light c in our toy system is defined by the maximum rate at which information can propagate, which is determined by the rule, and in the case of this rule is one character per step. And in terms of this, we can then say that our foliation corresponds to a speed 0.3 c. But now we can look at the amount of time dilation, and it’s exactly the amount that relativity says it should be.
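For the 0.3 c foliation the arithmetic is easy to check directly (a back-of-the-envelope Python sketch, not part of the models' code):

```python
import math

beta = 0.3  # observer speed as a fraction of the speed of light
# each foliation slice samples fewer events, by the factor sqrt(1 - beta^2)
sample_factor = math.sqrt(1 - beta ** 2)
# equivalently, the observer's clock runs slow by the standard Lorentz factor
gamma = 1 / sample_factor
```

This gives γ ≈ 1.048, i.e. about a 4.8% time dilation, the same 1/√(1 − v²/c²) that special relativity prescribes.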
By the way, if we imagine trying to make our observer go “faster than light”, we can see that can’t work. Because there’s no way to tip the foliation at more than 45° in our picture, and still maintain the causal relationships implied by the causal graph.
OK, so in our toy model we can derive special relativity. But here’s the thing: this derivation isn’t specific to the toy model; it applies to any rule that has causal invariance. So even though we may be dealing with hypergraphs, not strings, and we may have a rule that shows all kinds of complicated behavior, if it ultimately has causal invariance, then (with various technical caveats, mostly about possible wildness in the causal graph) it will exhibit relativistic invariance, and a physics based on it will follow special relativity.
In our model, everything in the universe—space, matter, whatever—is supposed to be represented by features of our evolving hypergraph. So within that hypergraph, is there a way to identify things that are familiar from current physics, like mass, or energy?
I have to say that although it’s a widespread concept in current physics, I’d never thought of energy as something fundamental. I’d just thought of it as an attribute that things (atoms, photons, whatever) can have. I never really thought of it as something that one could identify abstractly in the very structure of the universe.
So it came as a big surprise when we recently realized that actually in our model, there is something we can point to, and say “that’s energy!”, independent of what it’s the energy of. The technical statement is: energy corresponds to the flux of causal edges through spacelike hypersurfaces. And, by the way, momentum corresponds to the flux of causal edges through timelike hypersurfaces.
OK, so what does this mean? First, what’s a spacelike hypersurface? It’s actually a standard concept in general relativity, for which there’s a direct analogy in our models. Basically it’s what forms a slice in our foliation. Why is it called what it’s called? We can identify two kinds of directions: spacelike and timelike.
A spacelike direction is one that involves just moving in space—and it’s a direction where one can always reverse and go back. A timelike direction is one that involves also progressing through time—where one can’t go back. We can mark spacelike and timelike hypersurfaces in the causal graph for our toy model:
CloudGet["https://wolfr.am/KVkTxvC5"]; (*regularCausalGraphPlot*)
CloudGet["https://wolfr.am/KVl97Tf4"]; (*lorentz*)
regularCausalGraphPlot[10, {1, 0.5}, {0., 0.}, {-0.5, 0}, {Red, Directive[Dashed, Red]}, lorentz[0.]]
(They might be called “surfaces”, except that “surfaces” are usually thought of as 2-dimensional, and in our 3-space + 1-time dimensional universe these foliation slices are 3-dimensional: hence the term “hypersurfaces”.)
OK, now let’s look at the picture. The “causal edges” are the causal connections between events, shown in the picture as lines joining the events. So when we talk about a “flux of causal edges through spacelike hypersurfaces”, what we’re talking about is the net number of causal edges that go down through the horizontal slices in the pictures.
In the toy model that’s trivial to see. But here’s a causal graph from a simple hypergraph model, where it’s already considerably more complicated:
Graph[ResourceFunction["WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, {{0, 0}, {0, 0}}, 15, "LayeredCausalGraph"], AspectRatio -> 1/2]
(Our toy-model causal graph starts from a line of events because we set up a long string as the initial condition; this starts from a single event because it’s starting from a minimal initial condition.)
But when we put a foliation on this causal graph (thereby effectively defining our reference frame) we can start counting how many causal edges go down through successive (“spacelike”) slices:
foliationLines[{lineDensityHorizontal_ : 1, lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, lineStyles : {_, _} : {Red, Red}, transform_ : (# &)] :=
 {If[lineDensityHorizontal != 0, Style[Table[Line[transform /@ {{-100 + First@offset, k - 100 tanHorizontal + Last@offset}, {100 + First@offset, k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityHorizontal}], First@lineStyles], {}],
  If[lineDensityVertical != 0, Style[Table[Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + Last@offset}, {k + 100 tanVertical + First@offset, 100 + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityVertical}], Last@lineStyles], {}]};
ResourceFunction["WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, {{0, 0}, {0, 0}}, 15]["LayeredCausalGraph", AspectRatio -> 1/2, Epilog -> foliationLines[{0.44, 0}, {0, 0}, {0, -0.5}, {Directive[Red, Opacity[0.2]], Red}]]
We can also ask how many causal edges go “sideways”, through timelike hypersurfaces:
foliationLines[{lineDensityHorizontal_ : 1, lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, lineStyles : {_, _} : {Red, Red}, transform_ : (# &)] :=
 {If[lineDensityHorizontal != 0, Style[Table[Line[transform /@ {{-100 + First@offset, k - 100 tanHorizontal + Last@offset}, {100 + First@offset, k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityHorizontal}], First@lineStyles], {}],
  If[lineDensityVertical != 0, Style[Table[Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + Last@offset}, {k + 100 tanVertical + First@offset, 100 + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityVertical}], Last@lineStyles], {}]};
ResourceFunction["WolframModel"][{{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}}, {{0, 0}, {0, 0}}, 15]["LayeredCausalGraph", AspectRatio -> 1/2, Epilog -> foliationLines[{0, 1/3}, {0, 0}, {2.1, 0}, {Directive[Red, Opacity[0.5]], Directive[Dotted, Opacity[0.7], Red]}]]
OK, so why do we think these fluxes of edges correspond to energy and momentum? Imagine what happens if we change our foliation, say tipping it to correspond to motion at some velocity, as we did in the previous section. It takes a little bit of math, but what we find out is that our fluxes of causal edges transform with velocity basically just like we saw distance and time transform in the previous section.
In the standard derivation of relativistic mechanics, there’s a consistency argument that energy has to transform with velocity like time does, and momentum like distance. But now we actually have a structural reason for this to be the case. It’s a fundamental consequence of our whole setup, and of causal invariance. In traditional physics, one often says that position is the conjugate variable to momentum, and energy to time. And that’s something that’s burnt into the mathematical structure of the theory. But here it’s not something we’re burning in; it’s something we’re deriving from the underlying structure of our model.
And that means there’s ultimately a lot more we can say about it. For example, we might wonder what the “zero of energy” is. After all, if we look at one of our causal graphs, a lot of the causal edges are really just going into “maintaining the structure of space”. So if in a sense space is uniform, there’s inevitably a uniform “background flux” of causal edges associated with that. And whatever we consider to be “energy” corresponds to the fluctuations of that flux around its background value.
By the way, it’s worth mentioning what a “flux of causal edges” corresponds to. Each causal edge represents a causal connection between events, that is in a sense “carried” by some element in the underlying hypergraph (the “spatial hypergraph”). So a “flux of causal edges” is in effect the communication of activity (i.e. events), either in time (i.e. through spacelike hypersurfaces) or in space (i.e. through timelike hypersurfaces). And at least in some approximation we can then say that energy is associated with activity in the hypergraph that propagates information through time, while momentum is associated with activity that propagates information in space.
There’s a fundamental feature of our causal graphs that we haven’t mentioned yet—that’s related to information propagation. Start at any point (i.e. any event) in a causal graph. Then trace the causal connections from that event. You’ll get some kind of cone (here just in 2D):
✕
CloudGet["https://wolfr.am/KVl97Tf4"];(*lorentz*) foliationLines[{lineDensityHorizontal_ : 1, lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, lineStyles : {_, _} : {Red, Red}, transform_ : (# &)] := {If[lineDensityHorizontal != 0, Style[Table[ Line[transform /@ {{-100 + First@offset, k - 100 tanHorizontal + Last@offset}, {100 + First@offset, k + 100 tanHorizontal + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityHorizontal}], First@lineStyles], {}], If[lineDensityVertical != 0, Style[Table[ Line[transform /@ {{k - 100 tanVertical + First@offset, -100 + Last@offset}, {k + 100 tanVertical + First@offset, 100 + Last@offset}}], {k, -100.5, 100.5, 1/lineDensityVertical}], Last@lineStyles], {}]}; squareCausalGraphPlot[ layerCount_ : 9, {lineDensityHorizontal_ : 1, lineDensityVertical_ : 1}, {tanHorizontal_ : 0.0, tanVertical_ : 0.0}, offset : {_, _} : {0, 0}, lineStyles : {_, _} : {Red, Red}, transform_ : (# &)] := NeighborhoodGraph[ DirectedGraph[ Flatten[Table[{v[{i + 1, j}] -> v[{i, j}], v[{i + 1, j + 1}] -> v[{i, j}]}, {i, layerCount - 1}, {j, 1 + Round[-layerCount/2 + i/2], (layerCount + i)/2}]], VertexCoordinates -> Catenate[ Table[v[{i, j}] -> transform[{2 (#2 - #1/2), #1} & @@ {i, j}], {i, layerCount + 1}, {j, 1 + Round[-layerCount/2 + i/2] - 1, (layerCount + i)/2 + 1}]], VertexSize -> .33, VertexStyle -> Directive[Directive[Opacity[.7], Hue[0.14, 0.34, 1.]], EdgeForm[Directive[Opacity[0.4], Hue[0.09, 1., 0.91]]]], VertexShapeFunction -> "Rectangle", Epilog -> foliationLines[{lineDensityHorizontal, lineDensityVertical}, {tanHorizontal, tanVertical}, offset, lineStyles, transform]], v[{1, 1}], 9]; With[{graph = squareCausalGraphPlot[ 10, {0, 0}, {0., 0.}, {-0.5, 0}, {Red, Directive[Dotted, Red]}, lorentz[0.]]}, Graph[graph, VertexStyle -> {Directive[ Directive[Opacity[.7], Hue[0.14, 0.34, 1.]], EdgeForm[Directive[Opacity[0.4], Hue[0.09, 1., 0.91]]]], Alternatives @@ VertexOutComponent[graph, v[{9, 5}]] -> 
Directive[Directive[Opacity[.6], Hue[0, 0.45, 0.87]], EdgeForm[ Hue[0, 1, 0.48]]]}]]
The cone is more complicated in a more complicated causal graph. But you’ll always have something like it. And what it corresponds to physically is what’s normally called a light cone (or “forward light cone”). Assuming we’ve drawn our causal graph so that events are somehow laid out in space across the page, then the light cone will show how information (as transmitted by light) can spread in space with time.
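Tracing causal connections forward from an event is just a reachability computation on a directed acyclic graph. Here is a minimal sketch in Python (the grid-like causal graph, and all names in it, are invented for illustration, not taken from the models above):

```python
from collections import deque

def future_cone(events, edges, start):
    """Forward-reachable set (the "future light cone") of an event in a causal graph."""
    adj = {e: [] for e in events}
    for a, b in edges:
        adj[a].append(b)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A toy grid-like causal graph: event (t, x) causally affects (t+1, x) and (t+1, x+1)
events = [(t, x) for t in range(5) for x in range(5)]
edges = [((t, x), (t + 1, x2))
         for t in range(4) for x in range(5)
         for x2 in (x, x + 1) if x2 < 5]

cone = future_cone(events, edges, (0, 0))
```

The resulting set widens by one event per step, giving the cone shape: at time t it contains the events with x from 0 to t.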
When the causal graph gets complicated, the whole setup with light cones gets complicated, as we’ll discuss for example in connection with black holes later. But for now, we can just say there are cones in our causal graph, and in effect the angle of these cones represents the maximum rate of information propagation in the system, which we can identify with the physical speed of light.
And in fact, not only can we identify light cones in our causal graph: in some sense we can think of our whole causal graph as just being a large number of “elementary light cones” all knitted together. And, as we mentioned, much of the structure that’s built necessarily goes into, in effect, “maintaining the structure of space”.
But let’s look more closely at our light cones. There are causal edges on their boundaries that in effect correspond to propagation at the speed of light—and that, in terms of the underlying hypergraph, correspond to events that “reach out” in the hypergraph, and “entrain” new elements as quickly as possible. But what about causal edges that are “more vertical”? These causal edges are associated with events that in a sense reuse elements in the hypergraph, without involving new ones.
And it looks like these causal edges have an important interpretation: they are associated with mass (or, more specifically, rest mass). OK, so the total flux of causal edges through spacelike hypersurfaces corresponds to energy. And now we’re saying that the flux of causal edges specifically in the timelike direction corresponds to rest mass. We can see what happens if we “tip our reference frames” just a bit, say corresponding to a velocity v ≪ c. Again, there’s a small amount of math, but it’s pretty easy to derive formulas for momentum (p) and energy (E). The speed of light c comes into the formulas because it defines the ratio of “horizontal” (i.e. spacelike) to “vertical” (i.e. timelike) distances on the causal graph. And for v small compared to c we get:
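The formulas referred to here are the standard special-relativistic expressions for momentum and energy, expanded for v ≪ c (these are textbook results, restated for reference rather than derived from the models):

```latex
p = \frac{m v}{\sqrt{1 - v^{2}/c^{2}}} \approx m v,
\qquad
E = \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}} \approx m c^{2} + \tfrac{1}{2} m v^{2}
\qquad (v \ll c)
```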
So from these formulas we can see that just by thinking about causal graphs (and, yes, with a backdrop of causal invariance, and a whole host of detailed mathematical limit questions that we’re not discussing here), we’ve managed to derive a basic (and famous) fact about the relation between energy and mass:
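Written out, that relation is:

```latex
E = m c^{2}
```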
Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.
Earlier on, we talked about how curvature of space can arise in our models. But at that point we were just talking about “empty space”. Now we can go back and also talk about how curvature interacts with mass and energy in space.
In our earlier discussion, we talked about constructing spherical balls by starting at some point in the hypergraph, and then following all possible sequences of r connections. But now we can do something directly analogous in the causal graph: start at some point, and follow possible sequences of t connections. There’s quite a bit of mathematical trickiness, but essentially this gets us “volumes of light cones”.
If space is effectively d-dimensional, then to a first approximation this volume will grow like t^{d+1}. But as in the spatial case, there’s a correction term, this time proportional to the so-called Ricci tensor R_{μν}. (The actual expression is roughly t^{d+1} (1 − (1/6) R_{μν} t^{μ} t^{ν} + …), where the t^{μ} are timelike vectors, etc.)
OK, but we also know something else about what is supposed to be inside our light cones: not only are there “background connections” that maintain the structure of space, there are also “additional” causal edges that are associated with energy, momentum and mass. And in the limit of a large causal graph, we can identify the density of these with the so-called energy-momentum tensor T_{μν}. So in the end we have two contributions to the “volumes” of our light cones: one from “pure curvature” and one from energy-momentum.
Again, there’s some math involved. But the main thing is to think about the limit when we’re looking at a very large causal graph. What needs to be true for us to have d-dimensional space, as opposed to something much wilder? This puts a constraint on the growth rates of our light cone volumes, and when one works everything out, it implies that the following equation must hold:
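The equation in question is the Einstein field equation, written here in standard general-relativity notation and without a cosmological term (κ is the usual coupling constant; the physical value 8πG/c⁴ is a textbook result, not something taken from the text above):

```latex
R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu} = \kappa\, T_{\mu\nu},
\qquad \kappa = \frac{8 \pi G}{c^{4}}
```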
But this is exactly Einstein’s equation for the curvature of space when matter with a certain energy-momentum is present. We’re glossing over lots of details here. But it’s still, in my view, quite spectacular: from the basic structure of our very simple models, we’re able to derive a fundamental result in physics: the equation that for more than a hundred years has passed every test in describing the operation of gravity.
There’s a footnote here. The equation we’ve just given is without a so-called cosmological term. And how that works is bound up with the question of what the zero of energy is, which in our model relates to what features of the evolving hypergraph just have to do with the “maintenance of space”, and what have to do with “things in space” (like matter).
In existing physics, there’s an expectation that even in the “vacuum” there’s actually a formally infinite density of pairs of virtual particles associated with quantum mechanics. Essentially what’s happening is that there are always pairs of particles and antiparticles being created, that annihilate quickly, but that in aggregate contribute a huge effective energy density. We’ll discuss how this relates to quantum mechanics in our models later. But for now let’s just recall that particles (like electrons) in our models basically correspond to locally stable structures in the hypergraph.
When we think about how “space is maintained”, it’s basically through all sorts of seemingly random updating events in the hypergraph. But in existing physics (or, specifically, quantum field theory) we’re basically expected to analyze everything in terms of (virtual) particles. So if we try to do that with all these random updating events, it’s not surprising that we end up saying that there are these infinite collections of things going on. (Yes, this can be made much more precise; I’m just giving an outline here.)
But as soon as we say this, there is an immediate problem: we’re saying that there’s a formally infinite—or at least huge—energy density that must exist everywhere in the universe. But if we then apply Einstein’s equation, we’ll conclude that this must produce enough curvature to basically curl the universe up into a tiny ball.
One way to get out of this is to introduce a so-called cosmological term, that’s just an extra term in the Einstein equations, and then posit that this term is sized so as to exactly cancel (yes, to perhaps one part in 10^{60} or more) the energy density from virtual particles. It’s certainly not a pretty solution.
But in our models, the situation is quite different. It’s not that we have virtual particles “in space”, that are having an effect on space. It’s that the same stuff that corresponds to the virtual particles is actually “making the space”, and maintaining its structure. Of course, there are lots of details about this—which no doubt depend on the particular underlying rule. But the point is that there’s no longer a huge mystery about why “vacuum energy” doesn’t basically destroy our universe: in effect, it’s because it’s what’s making our universe.
One of the big predictions of general relativity is the existence of black holes. So how do things like that work in our models? Actually, it’s rather straightforward. The defining feature of a black hole is the existence of an event horizon: a boundary that light signals can’t cross, and where in effect causal connection is broken.
In our models, we can explicitly see that happen in the causal graph. Here’s an example:
ResourceFunction["WolframModel"][{{0, 1}, {0, 2}, {0, 3}} -> {{1, 2}, {3, 2}, {3, 4}, {4, 3}, {4, 4}}, {{0, 0}, {0, 0}, {0, 0}}, 20, "CausalGraph"] // LayeredGraphPlot
At the beginning, everything is causally connected. But at some point the causal graph splits—and there’s an event horizon. Events happening on one side can’t influence ones on the other, and so on. And that’s how a region of the universe can “causally break off” to form something like a black hole.
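Causal disconnection of this kind can be checked mechanically: once the causal graph splits, the forward-reachable sets of events on the two sides are disjoint. A minimal sketch (the toy graph and its event names are invented for illustration):

```python
from collections import deque

def reachable(adj, start):
    """Forward-reachable set of events from `start` in a causal graph."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A toy causal graph that splits after event "r": after the split,
# no event on the "a" side can influence any event on the "b" side.
adj = {
    "r":  ["a0", "b0"],
    "a0": ["a1", "a2"], "a1": ["a3"], "a2": ["a3"],
    "b0": ["b1", "b2"], "b1": ["b3"], "b2": ["b3"],
}

a_side = reachable(adj, "a0")
b_side = reachable(adj, "b0")
assert a_side.isdisjoint(b_side)  # an event horizon: the sides are causally disconnected
```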
But actually, in our models, the “breaking off” can be even more extreme. Not only can the causal graph split; the spatial hypergraph can actually throw off disconnected pieces—each of which in effect forms a whole “separate universe”:
Framed[ResourceFunction["WolframModelPlot"][#, ImageSize -> {UpTo[100], UpTo[60]}], FrameStyle -> LightGray] & /@ ResourceFunction["WolframModel"][{{1, 2, 3}, {4, 5, 3}} -> {{2, 6, 4}, {6, 1, 2}, {4, 2, 1}}, {{0, 0, 0}, {0, 0, 0}}, 20, "StatesList"]
By the way, it’s interesting to look at what happens to the foliations observers make when there’s an event horizon. Causal invariance says that paths in the causal graph that diverge should always eventually merge. But if the paths go into different disconnected pieces of the causal graph, that can’t ever happen. So how does an observer deal with that? Well, basically they have to “freeze time”. They have to have a foliation where successive time slices just pile up, and never enter the disconnected pieces.
It’s just like what happens in general relativity. To an observer far from the black hole, it’ll seem to take an infinite time for anything to fall into the black hole. For now, this is just a phenomenon associated with the structure of space. But later we’ll see that it’s also the direct analog of something completely different: the process of measurement in quantum mechanics.
Coming back to gravity: we can ask questions not only about event horizons, but also about actual singularities in spacetime. In our models, these are places where lots of paths in a causal graph converge to a single point. And in our models, we can immediately study questions like whether there’s always an event horizon associated with any singularity (the “cosmic censorship hypothesis”).
We can ask about other strange phenomena from general relativity. For example, there are closed timelike curves, sometimes viewed as allowing time travel. In our models, closed timelike curves are inconsistent with causal invariance. But we can certainly invent rules that produce them. Here’s an example:
Graph[ResourceFunction["MultiwaySystem"][{"AB" -> "BAB", "BA" -> "A"}, "ABA", 4, "StatesGraph"], GraphLayout -> {"LayeredDigraphEmbedding", "RootVertex" -> "ABA"}]
We start from one “initial” state in this multiway system. But as we go forward we can enter a loop where we repeatedly visit the same state. And this loop also occurs in the causal graph. We think we’re “going forward in time”. But actually we’re just in a loop, repeatedly returning to the same state. And if we tried to make a foliation where we could describe time as always advancing, we just wouldn’t be able to do it.
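The loop can be checked concretely with the rules above: starting from "ABA", one rewrite gives "BABA", and a rewrite of "BABA" returns to "ABA". A small Python sketch of one step of string rewriting:

```python
def rewrites(s, rules):
    """All strings reachable from s by one application of one rule, at any match position."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

rules = [("AB", "BAB"), ("BA", "A")]
# One path through the states graph revisits its starting state:
#   "ABA" -> "BABA" (via AB -> BAB) -> "ABA" (via BA -> A)
assert "BABA" in rewrites("ABA", rules)
assert "ABA" in rewrites("BABA", rules)
```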
In our model, the universe can start as a tiny hypergraph—perhaps a single self-loop. But then—as the rule gets applied—it progressively expands. With some particularly simple rules, the total size of the hypergraph has to just uniformly increase; with others it can fluctuate.
But even if the size of the hypergraph is always increasing, that doesn’t mean we’d necessarily notice. It could be that essentially everything we can see just expands too—so in effect the granularity of space is just getting finer and finer. This would be an interesting resolution to the age-old debate about whether the universe is discrete or continuous. Yes, it’s structurally discrete, but the scale of discreteness relative to our scale is always getting smaller and smaller. And if this happens fast enough, we’d never be able to “see the discreteness”—because every time we tried to measure it, the universe would effectively have subdivided before we got the result. (Somehow it’d be like the ultimate calculus epsilon-delta proof: you challenge the universe with an epsilon, and before you can get the result, the universe has made a smaller delta.)
There are some other strange possibilities too. Like that the whole hypergraph for the universe is always expanding, but pieces are continually “breaking off”, effectively forming black holes of different sizes, and allowing the “main component” of the universe to vary in size.
But regardless of how this kind of expansion works in our universe today, it’s clear that if the universe started with a single self-loop, it had to do a lot of expanding, at least early on. And here there’s an interesting possibility that’s relevant for understanding cosmology.
Although our current universe exhibits three-dimensional space, in our models there’s no reason to think that the early universe necessarily did too. There are very different things that can happen in our models:
ResourceFunction["WolframModel"][#1, #2, #3, "FinalStatePlot"] & @@@ {{{{1, 2, 3}, {4, 5, 6}, {2, 6}} -> {{7, 7, 2}, {6, 2, 8}, {8, 5, 7}, {8, 9, 3}, {1, 6}, {10, 6}, {5, 3}, {7, 11}}, {{0, 0, 0}, {0, 0, 0}, {0, 0}}, 16}, {{{1, 2, 3}, {1, 4, 5}, {3, 6}} -> {{7, 8, 7}, {7, 5, 6}, {9, 5, 5}, {1, 7, 4}, {7, 5}, {5, 10}, {11, 6}, {6, 9}}, {{0, 0, 0}, {0, 0, 0}, {0, 0}}, 100}, {{{1, 2, 3}, {3, 4}} -> {{5, 5, 5}, {5, 6, 4}, {3, 1}, {1, 5}}, {{0, 0, 0}, {0, 0}}, 16}}
In the first example here, different parts of space effectively separate into non-communicating “black hole” tree branches. In the second example, we have something like ordinary—in this case 2-dimensional—space. But in the third example, space is in a sense very connected. If we work out the volume of a spherical ball, it won’t grow like r^{d}; it’ll grow exponentially with r (e.g. like 2^{r}).
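The contrast between polynomial r^{d} growth and exponential 2^{r} growth can be illustrated with a plain breadth-first “ball volume” computation. The grid and binary-tree graphs here are generic stand-ins for illustration, not hypergraphs from the models above:

```python
def ball_size(neighbors, origin, r):
    """Number of nodes within graph distance r of origin (layered breadth-first search)."""
    seen, frontier = {origin}, [origin]
    for _ in range(r):
        nxt = []
        for node in frontier:
            for nb in neighbors(node):
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(seen)

# 2D grid: ball volume grows polynomially (~r^2 for d = 2)
grid = lambda p: [(p[0] + dx, p[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
# Binary tree (nodes are 0/1 strings): ball volume grows exponentially (~2^r)
tree = lambda s: ([s[:-1]] if s else []) + [s + "0", s + "1"]

grid_sizes = [ball_size(grid, (0, 0), r) for r in range(1, 7)]  # quadratic growth
tree_sizes = [ball_size(tree, "", r) for r in range(1, 7)]      # exponential growth
```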
If we look at the causal graph, we’ll see that you can effectively “go everywhere in space”, or affect every event, very quickly. It’d be as if the speed of light were infinite. But really it’s because space is effectively infinite-dimensional.
In typical cosmology, it’s been quite mysterious how different parts of the early universe managed to “communicate” with each other, for example, to smooth out perturbations. But if the universe starts effectively infinite-dimensional, and only later “relaxes” to being finite-dimensional, that’s no longer a mystery.
So, OK, what might we see in the universe today that would reflect what happened extremely early in its history? The fact that our models deterministically generate behavior that seems for all practical purposes random means that we can expect that most features of the initial conditions or very early stages of the universe will quickly be “encrypted”, and effectively not reconstructable.
But it’s just conceivable that something like a breaking of symmetry associated with the first few hypergraphs might somehow survive. And that suggests the bizarre possibility that—just maybe—something like the angular structure of the cosmic microwave background or the very large-scale distribution of galaxies might reflect the discrete structure of the very early universe. Or, in other words, it’s just conceivable that what amounts to the rule for the universe is, in effect, painted across the whole sky. I think this is extremely unlikely, but it’d certainly be an amazing thing if the universe were “self-documenting” that way.
We’ve talked several times about particles like electrons. In current physics theories, the various (truly) elementary particles—the quarks, the leptons (electron, muon, neutrinos, etc.), the gauge bosons, the Higgs—are all assumed to intrinsically be point particles, of zero size. In our models, that’s not how it works. The particles are all effectively “little lumps of space” that have various special properties.
My guess is that the precise list of what particles exist will be something that’s specific to a particular underlying rule. In cellular automata, for example, we’re used to seeing complicated sets of possible localized structures arise:
SeedRandom[2525]; ArrayPlot[CellularAutomaton[110, RandomInteger[1, 700], 500], ImageSize -> Full, Frame -> None]
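The cellular automaton shown above is rule 110. For readers without the Wolfram Language, here is a minimal Python version of one update step (with cells held at 0 beyond the edges, a simplification relative to the wide random initial condition used above):

```python
def ca_step(cells, rule=110):
    """One step of an elementary cellular automaton; cells beyond the edges are held at 0."""
    padded = [0] + cells + [0]
    # Each cell's new value is the bit of `rule` indexed by its 3-cell neighborhood
    return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Evolve a single-cell seed a few steps; rule 110 grows structure to the left
row = [0] * 8 + [1] + [0] * 8
history = [row]
for _ in range(8):
    row = ca_step(row)
    history.append(row)
```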
In our hypergraphs, the picture will inevitably be somewhat different. The “core feature” of each particle will be some kind of locally stable structure in the hypergraph (a simple analogy might be that it’s a lump of nonplanarity in an otherwise planar graph). But then there’ll be lots of causal edges associated with the particle, defining its particular energy and momentum.
Still, the “core feature” of the particles will presumably define things like their charge, quantum numbers, and perhaps spin—and the fact that these things are observed to occur in discrete units may reflect the fact that it’s a small piece of hypergraph that’s involved in defining them.
It’s not easy to know what the actual scale of discreteness in space might be in our models. But a possible (though potentially unreliable) estimate might be that the “elementary length” is around 10^{–93} meters. (Note that that’s very small compared to the Planck length ~10^{–35} meters that arises essentially from dimensional analysis.) And with this elementary length, the radius of the electron might be 10^{–81} meters. Tiny, but not zero. (Note that current experiments only tell us that the size of the electron is less than about 10^{–22} meters.)
One feature of our models is that there should be a “quantum of mass”—a discrete amount that all masses, for example of particles, are multiples of. With our estimate for the elementary length, this quantum of mass would be small, perhaps 10^{–30} eV, or 10^{36} times smaller than the mass of the electron.
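A quick arithmetic sanity check on the ratio quoted above (the 0.511 MeV electron rest energy is a standard physical value, not something from the text):

```python
# The electron's rest mass is about 0.511 MeV in energy units, so a mass
# quantum 10^36 times smaller is on the order of 10^-30 eV.
electron_rest_energy_ev = 0.511e6          # eV
mass_quantum_ev = electron_rest_energy_ev / 1e36
assert 1e-31 < mass_quantum_ev < 1e-30
```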
And this raises an intriguing possibility. Perhaps the particles—like electrons—that we currently know about are the “big ones”. (With our estimates, an electron would have something like 10^{36} hypergraph elements in it.) And maybe there are some much smaller, and much lighter ones. At least relative to the particles we currently know, such particles would have few hypergraph elements in them—so I’m referring to them as “oligons” (after the Greek word ὀλιγος for “few”).
What properties would these oligons have? They’d probably interact very very weakly with other things in the universe. Most likely lots of oligons would have been produced in the very early universe, but with their very weak interactions, they’d soon “drop out of thermal equilibrium”, and be left in large numbers as relics—with energies that become progressively lower as the universe expands around them.
So where might oligons be now? Even though their other interactions would likely be exceptionally weak, they’d still be subject to gravity. And if their energies end up being low enough, they’d basically collect in gravity wells around the universe—which means in and around galaxies.
And that’s interesting—because right now there’s quite a mystery about the amount of mass seen in galaxies. There appears to be a lot of “dark matter” that we can’t see but that has gravitational effects. Well, maybe it’s oligons. Maybe even lots of different kinds of oligons: a whole shadow physics of much lighter particles.
“But how will you ever get quantum mechanics?”, physicists would always ask me when I would describe earlier versions of my models. In many ways, quantum mechanics is the pinnacle of existing physics. It’s always had a certain “you-are-not-expected-to-understand-this” air, though, coupled with “just-trust-the-mathematical-formalism”. And, yes, the mathematical formalism has worked well—really well—in letting us calculate things. (And it almost seems more satisfying because the calculations are often so hard; indeed, hard enough that they’re what first made me start using computers to do mathematics 45 years ago.)
Our usual impression of the world is that definite things happen. And before quantum mechanics, classical physics typically captured this in laws—usually equations—that would tell one what specifically a system would do. But in quantum mechanics the formalism involves any particular system doing lots of different things “in parallel”, with us just seeing samples—ultimately with certain probabilities—of these possibilities.
And as soon as one hears of a model in which there are definite rules, one might assume that it could never reproduce quantum mechanics. But, actually, in our models, quantum mechanics is not just possible; it’s absolutely inevitable. And, as we’ll see, in something I consider quite beautiful, the core of what leads to it turns out to be the same as what leads to relativity.
OK, so how does this work? Let’s go back to what we discussed when we first started talking about time. In our models there’s a definite rule for updates to make in our hypergraphs, say:
RulePlot[ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{y, z}, {y, w}, {z, w}, {x, w}}], VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.6]
But if we’ve got a hypergraph like this:
ResourceFunction["WolframModel"][{{x, y}, {x, z}} -> {{y, z}, {y, w}, {z, w}, {x, w}}, {{0, 0}, {0, 0}}, 6, "FinalStatePlot"]
there will usually be many places where this rule can be applied. So which update should we do first? The model doesn’t tell us. But let’s just imagine all the possibilities. The rule tells us what they all are—and we can represent them (as we discussed above) as a multiway system—here illustrated using the simpler case of strings rather than hypergraphs:
ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 6, "StatesGraph"]
Each node in this graph now represents a complete state of our system (a hypergraph in our actual models). And each node is joined by arrows to the state or states that one gets by applying a single update to it.
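This string multiway system ({"A" -> "AB", "B" -> "A"} starting from "A") is simple enough to regenerate in a few lines. Here is a hedged Python sketch of the successive generations of states (function names are my own, not Wolfram Language ones):

```python
def successors(s, rules):
    """All strings obtained by applying one rule at one match position."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def multiway_generations(rules, init, steps):
    """Successive sets of states in the multiway system, starting from init."""
    gens, current = [{init}], {init}
    for _ in range(steps):
        current = set().union(*(successors(s, rules) for s in current))
        gens.append(current)
    return gens

gens = multiway_generations([("A", "AB"), ("B", "A")], "A", 3)
# The branching is visible by generation 2, where "AB" yields both "ABB" and "AA"
```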
If our model had been operating “like classical physics” we would expect it to progress in time from one state to another, say like this:
ResourceFunction["GenerationalMultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 5, "StatesGraph"]
But the crucial point is that the structure of our models leaves us no choice but to consider multiway systems. The form of the whole multiway system is completely determined by the rules. But—in a way that is already quite reminiscent of the standard formalism of quantum mechanics—the multiway system defines many different possible paths of history.
But now there is a mystery. If there are always all these different possible paths of history, how is it that we ever think that definite things happen in the world? This has been a core mystery of quantum mechanics for a century. It turns out that if one’s just using quantum mechanics to do calculations, the answer basically doesn’t matter. But if one wants to “really understand what’s going on” in quantum mechanics, it’s something that definitely does matter.
And the exciting thing is that in our models, there’s an obvious resolution. And actually it’s based on the exact same phenomenon—causal invariance—that gives us relativity.
Here’s roughly how this works. The key point is to think about what an observer who is themselves part of the multiway system will conclude about the world. Yes, there are different possible paths of history. But—just as in our discussion of relativity—the only aspect of them that an observer will ever be aware of is the causal relationships between the events they involve. But the point is that—even though when looked at from “outside” the paths are different—causal invariance implies that the network of relationships between causal events (which is all that’s relevant when one’s inside the system) will always be exactly the same.
In other words—much as in the case of relativity—even though from outside the system there may seem to be many possible “threads of time”, from inside the system causal invariance implies that there’s in a sense ultimately just one thread of time, or, in effect, one objective reality.
How does this all relate to the detailed standard formalism of quantum mechanics? It’s a little complicated. But let me make at least a few comments here. (There’s some more detail in my technical document; Jonathan Gorard has given even more.)
The states in the multiway system can be thought of as possible states of the quantum system. But how do we characterize how observers experience them? In particular, which states is the observer aware of when? Just like in the relativity case, the observer can in a sense make a choice of how they define time. One possibility might be through a foliation of the multiway system like this:
Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 6, "StatesGraph"], AspectRatio -> 1/2, Epilog -> {ResourceFunction["WolframPhysicsProjectStyleData"]["BranchialGraph", "EdgeStyle"], AbsoluteThickness[1.5], Table[Line[{{-8, i}, {10, i}}], {i, 1/2, 6 + 1/2}]}]
In the formalism of quantum mechanics, one can then say that at each time, the observer experiences a superposition of possible states of the system. But now there’s a critical point. In direct analogy to the case of relativity, there are many different possible choices the observer can make about how to define time—and each of them corresponds to a different foliation of the multiway graph.
Again by analogy to relativity, we can then think of these choices as what we can call different “quantum observation frames”. Causal invariance implies that as long as they respect the causal relationships in the graph, these frames can basically be set up in any way we want. In talking about relativity, it was useful to just have “tipped parallel lines” (“inertial frames”) representing observers who are moving uniformly in space.
In talking about quantum mechanics, other frames are useful. In particular, in the standard formalism of quantum mechanics, it’s common to talk about “quantum measurement”: essentially the act of taking a quantum system and determining some definite (essentially classical) outcome from it. Well, in our setup, a quantum measurement basically corresponds to a particular quantum observation frame.
Here’s an example:
(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\ Section-08/QM-foliations-01.wl*) CloudGet["https://wolfr.am/LbdPPaXZ"]; Magnify[ With[{graph = Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB"}, {"AA"}, 7, "StatesGraph"], VertexShapeFunction -> {Alternatives @@ VertexList[ ResourceFunction[ "GenerationalMultiwaySystem"][{"A" -> "AB"}, {"AA"}, 5, "StatesGraph"]] -> (Text[ Framed[Style[stripMetadata[#2] , Hue[0, 1, 0.48]], Background -> Directive[Opacity[.6], Hue[0, 0.45, 0.87]], FrameMargins -> {{2, 2}, {0, 0}}, RoundingRadius -> 0, FrameStyle -> Directive[Opacity[0.5], Hue[0, 0.52, 0.8200000000000001]]], #1, {0, 0}] &)}, VertexCoordinates -> (Thread[ VertexList[#] -> GraphEmbedding[#, Automatic, 2]] &[ ResourceFunction["MultiwaySystem"][{"A" -> "AB"}, {"AA"}, 8, "StatesGraph"]])]}, Show[graph, foliationGraphics[graph, #, {0.1, 0.05}, Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]]] & /@ {{{"AA"}}, {{ "AA", "AAB", "ABA"}}, {{ "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA"}}, {{ "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", "ABBAB", "ABBBA"}}, {{ "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", "ABBBBA"}, { "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA"}, { "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA", "AABBBBBB", "ABABBBBB", "ABBBBBAB", "ABBBBBBA"}, { "AA", "AAB", "ABA", "AABB", "ABAB", "ABBA", "AABBB", "ABABB", "ABBAB", "ABBBA", "AABBBB", "ABABBB", "ABBABB", "ABBBAB", "ABBBBA", "AABBBBB", "ABABBBB", "ABBBBAB", "ABBBBBA", "AABBBBBB", "ABABBBBB", "ABBBBBAB", "ABBBBBBA", "AABBBBBBB", "ABABBBBBB", "ABBBBBBAB", "ABBBBBBBA"}}}]], 0.9] |
The successive pink lines effectively mark off what the observer is considering to be successive moments in time. So when all the lines bunch up below the state ABBABB what it means is that the observer is effectively choosing to “freeze time” for that state. In other words, the observer is saying “that’s the state I consider the system to be in, and I’m sticking to it”. Or, put another way, even though in the full multiway graph there’s all sorts of other “quantum mechanical” evolution of states going on, the observer has set up their quantum observation frame so that they pick out just a particular, definite, classical-like outcome.
OK, but can they consistently do that? Well, that depends on the actual underlying structure of the multiway graph, which ultimately depends on the actual underlying rule. In the example above, we’ve set up a foliation (i.e. a quantum observation frame) that does the best possible job in this rule at “freezing time” for the ABBABB state. But just how long can this “reality distortion field” be maintained?
The only way to keep the foliation consistent in the multiway graph above is to have it progressively expand over time. In other words, to keep time frozen, more and more quantum states have to be pulled into the “reality distortion field”, and so there’s less and less coherence in the system.
The picture above is for a very trivial rule. Here’s a corresponding picture for a slightly more realistic case:
(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/\ Section-08/QM-foliations-01.wl*) CloudGet["https://wolfr.am/LbdPPaXZ"]; Show[drawFoliation[ Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 6, "StatesGraph"], VertexShapeFunction -> {Alternatives @@ VertexList[ ResourceFunction["GenerationalMultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 5, "StatesGraph"]] -> (Text[ Framed[Style[stripMetadata[#2] , Hue[0, 1, 0.48]], Background -> Directive[Opacity[.2], Hue[0, 0.45, 0.87]], FrameMargins -> {{2, 2}, {0, 0}}, RoundingRadius -> 0, FrameStyle -> Directive[Opacity[0.5], Hue[0, 0.52, 0.8200000000000001]]], #1, {0, 0}] &)}], {{"A", "AB", "AA", "ABB", "ABA"}, {"A", "AB", "AA", "ABB", "ABA", "AAB", "ABBB"}, {"A", "AB", "AA", "ABB", "ABA", "AAB", "ABBB", "AABB", "ABBBB"}}, {0.1, 0}, Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]]], Graphics[{Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]], AbsoluteThickness[1.6`], Line[{{-3.35, 4.05}, {-1.85, 3.3}, {-0.93, 2.35}, {-0.93, 1.32}, {0.23, 1.32}, {0.23, 2.32}, {2.05, 2.32}, {2.05, 1.51}, {1.15, 1.41}, {1.15, 0.5}, {2.15, 0.5}, {2.25, 1.3}, {4.3, 1.3}, {4.6, 0.5}, {8.6, 0.5}}]}]] |
And what we see here is that—even in this still incredibly simplified case—the structure of the multiway system will force the observer to construct a more and more elaborate foliation if they are to successfully freeze time. Measurement in quantum mechanics has always involved a slightly uncomfortable mathematical idealization—and this now gives us a sense of what’s really going on. (The situation is ultimately very similar to the problem of decoding “encrypted” thermodynamic initial conditions that I mentioned above.)
Quantum measurement is really about what an observer perceives. But if you are for example trying to construct a quantum computer, it’s not just a question of having a qubit be perceived as being maintained in a particular state; it actually has to be maintained in that state. And for this to be the case we actually have to freeze time for that qubit. But here’s a very simplified example of how that can happen in a multiway graph:
(*https://www.wolframcloud.com/obj/wolframphysics/TechPaper-Programs/Section-08/QM-foliations-01.wl*)
CloudGet["https://wolfr.am/LbdPPaXZ"];
Magnify[
 Show[With[{graph =
     Graph[ResourceFunction["MultiwaySystem"][{"A" -> "AB", "XABABX" -> "XXXX"}, {"XAAX"}, 6, "StatesGraph"],
      VertexCoordinates -> Append[
        (Thread[VertexList[#] -> GraphEmbedding[#, Automatic, 2]] &[
          ResourceFunction["MultiwaySystem"][{"A" -> "AB", "XABABX" -> "XXXX"}, {"XAAX"}, 8, "StatesGraph"]]),
        "XXXX" -> {0, 5.5}]]},
   Show[graph,
    foliationGraphics[graph, #, {0.1, 0.05},
       Directive[Hue[0.89, 0.97, 0.71], AbsoluteThickness[1.5]]] & /@ {
      Sequence[
       {{"XAAX"}},
       {{"XAAX", "XAABX", "XABAX"}},
       {{"XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX"}},
       {{"XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX",
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX"}},
       {{"XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX",
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX",
         "XAABBBBX", "XABABBBX", "XABBABBX", "XABBBABX", "XABBBBAX"},
        {"XAAX", "XAABX", "XABAX", "XAABBX", "XABABX", "XABBAX",
         "XAABBBX", "XABABBX", "XABBABX", "XABBBAX",
         "XAABBBBX", "XABABBBX", "XABBABBX", "XABBBABX", "XABBBBAX",
         "XAABBBBBX", "XABABBBBX", "XABBBBABX", "XABBBBBAX", "XABBABBBX", "XABBBABBX"}},
       {}, {}]}]]], .6]
All this discussion of “freezing time” might seem weird, and not like anything one usually talks about in physics. But actually, there’s a wonderful connection: the freezing of time we’re talking about here can be thought of as happening because we’ve got the analog in the space of quantum states of a black hole in physical space.
The picture above makes it plausible that we’ve got something where things can go in, but if they do, they always get stuck. But there’s more to it. If you’re an observer far from a black hole, then you’ll never actually see anything fall into the black hole in finite time (that’s why black holes are called “frozen stars” in Russian). And the reason for this is precisely because (according to the mathematics) time is frozen at the event horizon of the black hole. In other words, to successfully make a qubit, you effectively have to isolate it in quantum space like things get isolated in physical space by the presence of the event horizon of a black hole.
General relativity and quantum mechanics are the two great foundational theories of current physics. And in the past it’s often been a struggle to reconcile them. But one of the beautiful outcomes of our project so far has been the realization that at some deep level general relativity and quantum mechanics are actually the same idea. It’s something that (at least so far) is only clear in the context of our models. But the basic point is that both theories are consequences of causal invariance—just applied in different situations.
Recall our discussion of causal graphs in the context of relativity above. We drew foliations and said that if we looked at a particular slice, it would tell us the arrangement of the system in space at what we consider to be a particular time. So now let’s look at multiway graphs. We saw in the previous section that in quantum mechanics we’re interested in foliations of these. But if we look at a particular slice in one of these foliations, what does it represent? The foliation has got a bunch of states in it. And it turns out that we can think of them as being laid out in an abstract kind of space that we’re calling “branchial space”.
To make sense of this space, we have to have a way to say what’s near what. But actually the multiway graph gives us that. Take a look at this multiway graph:
foliationLines[{lineDensityHorizontal_ : 1, lineDensityVertical_ : 1},
  {tanHorizontal_ : 0.0, tanVertical_ : 0.0},
  offset : {_, _} : {0, 0}, lineStyles : {_, _} : {Red, Red},
  transform_ : (# &)] := {
  If[lineDensityHorizontal != 0,
   Style[Table[Line[transform /@ {
        {-100 + First@offset, k - 100 tanHorizontal + Last@offset},
        {100 + First@offset, k + 100 tanHorizontal + Last@offset}}],
     {k, -100.5, 100.5, 1/lineDensityHorizontal}], First@lineStyles], {}],
  If[lineDensityVertical != 0,
   Style[Table[Line[transform /@ {
        {k - 100 tanVertical + First@offset, -100 + Last@offset},
        {k + 100 tanVertical + First@offset, 100 + Last@offset}}],
     {k, -100.5, 100.5, 1/lineDensityVertical}], Last@lineStyles], {}]};
LayeredGraphPlot[
 ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, "A", 5, "EvolutionGraph"],
 Epilog -> foliationLines[{1, 0}, {0, 0}, {0, 0},
   {ResourceFunction["WolframPhysicsProjectStyleData"]["BranchialGraph", "EdgeStyle"],
    ResourceFunction["WolframPhysicsProjectStyleData"]["BranchialGraph", "EdgeStyle"]}]]
At each slice in the foliation, let’s draw a graph where we connect two states whenever they’re both part of the same “branch pair”, so that—like AA and ABB here—they both come from the same state on the slice before. Here are the graphs we get by doing this for successive slices:
Table[ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, "A", t,
  If[t <= 5, "BranchialGraph", "BranchialGraphStructure"]], {t, 2, 8}]
We call these branchial graphs. And we can think of them as representing the correlation—or entanglement—of quantum states. Two states that are nearby in the graph are highly entangled; those further away, less so. And we can imagine that as our system evolves, we’ll get larger and larger branchial graphs, until eventually, just like for our original hypergraphs, we can think of these graphs as limiting to something like a continuous space.
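To make the construction concrete, here is a minimal Python sketch of it (illustrative only, not the Wolfram Language implementation used above): it evolves the multiway system for the rule A → AB, B → A, and then connects two states on a slice whenever they form a branch pair, i.e. share a parent on the slice before.

```python
from itertools import combinations

RULES = [("A", "AB"), ("B", "A")]

def successors(s):
    """All strings reachable from s by one rewrite at one position."""
    out = set()
    for lhs, rhs in RULES:
        for i in range(len(s)):
            if s.startswith(lhs, i):
                out.add(s[:i] + rhs + s[i + len(lhs):])
    return out

def multiway_slices(start, steps):
    """Successive generations (slices) of distinct states."""
    slices = [{start}]
    for _ in range(steps):
        slices.append(set().union(*(successors(s) for s in slices[-1])))
    return slices

def branchial_edges(prev_slice, this_slice):
    """Edges of the branchial graph for this_slice: connect states
    that are both produced by the same state on prev_slice."""
    edges = set()
    for parent in prev_slice:
        kids = sorted(successors(parent) & this_slice)
        for a, b in combinations(kids, 2):
            edges.add((a, b))
    return edges

slices = multiway_slices("A", 3)
# AA and ABB both come from AB, so they are branchially connected
```

Running this reproduces the example in the text: on the third slice, AA and ABB share the parent AB and so get a branchial edge.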
But what is this space like? For our original hypergraphs, we imagined that we’d get something like ordinary physical space (say close to three-dimensional Euclidean space). But branchial space is something more abstract—and much wilder. And typically it won’t even be finite-dimensional. (It might approximate a projective Hilbert space.) But we can still think of it mathematically as some kind of space.
OK, things are getting fairly complicated here. But let me try to give at least a flavor of how things work. Here’s an example of a wonderful correspondence: curvature in physical space is like the uncertainty principle of quantum mechanics. Why do these have anything to do with each other?
The uncertainty principle says that if you measure, say, the position of something and then its momentum, you’ll get a different answer than if you do it in the opposite order. But now think about what happens when you try to make a rectangle in physical space by going in direction x first and then y, as compared with doing these in the opposite order. In a flat space, you’ll get to the same place. But in a curved space, you won’t:
parallelTransportOnASphere[size_] :=
  Module[{\[Phi], \[Theta]},
   With[{spherePoint = {Cos[\[Phi]] Sin[\[Theta]], Sin[\[Phi]] Sin[\[Theta]], Cos[\[Theta]]}},
    Graphics3D[{{Lighter[Yellow, .2], Sphere[]},
      First@ParametricPlot3D[spherePoint /. \[Phi] -> 0,
        {\[Theta], \[Pi]/2, \[Pi]/2 - size}, PlotStyle -> Darker@Red],
      Rotate[First@ParametricPlot3D[spherePoint /. \[Phi] -> 0,
         {\[Theta], \[Pi]/2, \[Pi]/2 - size}, PlotStyle -> Darker@Red], \[Pi]/2, {-1, 0, 0}],
      Rotate[First@ParametricPlot3D[spherePoint /. \[Phi] -> 0,
         {\[Theta], \[Pi]/2, \[Pi]/2 - size}, PlotStyle -> Darker@Red], size, {0, 0, 1}],
      Rotate[Rotate[First@ParametricPlot3D[spherePoint /. \[Phi] -> 0,
          {\[Theta], \[Pi]/2, \[Pi]/2 - size}, PlotStyle -> Darker@Red],
        \[Pi]/2, {-1, 0, 0}], size, {0, -1, 0}]},
     Boxed -> False, SphericalRegion -> False,
     Method -> {"ShrinkWrap" -> True}, ViewPoint -> {2, size, size}]]];
parallelTransportOnASphere[0 | 0.] := parallelTransportOnASphere[1.*^-10];
parallelTransportOnASphere[0.7]
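The operator-ordering half of this correspondence can be checked numerically. The following sketch (a standard quantum-mechanics illustration in Python with NumPy, not part of the models themselves) builds truncated position and momentum operators in a harmonic-oscillator basis and shows that applying x then p differs from p then x, with the commutator [x, p] equal to i·I away from the truncation boundary:

```python
import numpy as np

N = 12                              # truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
x = (a + a.T) / np.sqrt(2)          # position operator (ħ = m = ω = 1)
p = 1j * (a.T - a) / np.sqrt(2)     # momentum operator

xp = x @ p                          # "measure x, then p"
px = p @ x                          # "measure p, then x"
comm = xp - px                      # [x, p]: ≈ i·I except at the boundary
```

Here the ordering dependence (xp ≠ px) is exactly the non-closure of the "rectangle" described in the text, with the commutator playing the role of the curvature.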
And essentially what’s happening in the uncertainty principle is that you’re doing exactly this, but in branchial space, rather than physical space. And it’s because branchial space is wild—and effectively very curved—that you get the uncertainty principle.
Alright, so the next question might be: what’s the analog of the Einstein equations in branchial space? And again, it’s quite wonderful: at least in some sense, the answer is that it’s the path integral—the fundamental mathematical construct of modern quantum mechanics and quantum field theory.
This is again somewhat complicated. But let me try to give a flavor of it. Just as we discussed geodesics as describing paths traversed through physical space in the course of time, so also we can discuss geodesics as describing paths traversed through branchial space in the course of time. In both cases these geodesics are determined by curvature in the corresponding space. In the case of physical space, we argued (roughly) that the presence of excess causal edges—corresponding to energy—would lead to what amounts to curvature in the spatial hypergraph, as described by Einstein’s equations.
OK, so what about branchial space? Just like for the spatial hypergraph, we can think about the causal connections between the updating events that define the branchial graph. And we can once again imagine identifying the flux of causal edges—now not through spacelike hypersurfaces, but through branchlike ones—as corresponding to energy. And—much like in the spatial hypergraph case—an excess of these causal edges will have the effect of producing what amounts to curvature in branchial space (or, more strictly, in branchtime—the analog of spacetime). But this curvature will then affect the geodesics that traverse branchial space.
In general relativity, the presence of mass (or energy) causes curvature in space which causes the paths of geodesics to turn—which is what is normally interpreted as the action of the force of gravity. But now we have an analog in quantum mechanics, in our branchial space. The presence of energy effectively causes curvature in branchial space which causes the paths of geodesics through branchial space to turn.
What does turning correspond to? Basically it’s exactly what the path integral talks about. The path integral (and the usual formalism of quantum mechanics) is set up in terms of complex numbers. But it can just as well be thought of in terms of turning through an angle. And that’s exactly what’s happening with our geodesics in branchial space. In the path integral there’s a quantity called the action—which is a kind of relativistic analog of energy—and when one works things out more carefully, our fluxes of causal edges correspond to the action, but are also exactly what determine the rate of turning of geodesics.
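The "turning through an angle" picture is easy to make concrete. In a toy path sum (illustrative only; the function name and the particular actions are mine), each path contributes a unit complex phase exp(iS/ħ), phases compose by adding angles, and paths whose actions differ by πħ cancel:

```python
import cmath

def amplitude(actions, hbar=1.0):
    """Toy path integral: sum a unit complex phase exp(iS/hbar) over paths,
    one action S per path."""
    return sum(cmath.exp(1j * S / hbar) for S in actions)

# Equal actions: phases aligned, amplitudes add constructively
same = amplitude([0.7, 0.7])        # |same| = 2

# Actions differing by pi*hbar: phases opposite, amplitudes cancel
opposite = amplitude([0.0, cmath.pi])  # |opposite| ≈ 0
```

Multiplying two phases exp(iθ₁)·exp(iθ₂) = exp(i(θ₁+θ₂)): the complex numbers of the path integral really are just turning angles composed additively, which is the form the geodesic-turning picture uses.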
It all fits together beautifully. In physical space we have Einstein’s equations—the core of general relativity. And in branchial space (or, more accurately, multiway space) we have Feynman’s path integral—the core of modern quantum mechanics. And in the context of our models they’re just different facets of the same idea. It’s an amazing unification that I have to say I didn’t see coming; it’s something that just emerged as an inevitable consequence of our simple models of applying rules to collections of relations, or hypergraphs.
We can think of motion in physical space as like the process of exploring new elements in the spatial hypergraph, and potentially becoming affected by them. But now that we’re talking about branchial space, it’s natural to ask whether there’s something like motion there too. And the answer is that there is. And it’s basically exactly the same kind of thing: but instead of exploring new elements in the spatial hypergraph, we’re exploring new elements in the branchial graph, and potentially becoming affected by them.
There’s a way of talking about it in the standard language of quantum mechanics: as we move in branchial space, we’re effectively getting “entangled” with more and more quantum states.
OK, so let’s take the analogy further. In physical space, there’s a maximum speed of motion—the speed of light, c. So what about in branchial space? Well, in our models we can see that there’s also got to be a maximum speed of motion in branchial space. Or, in other words, there’s a maximum rate at which we can entangle with new quantum states.
In physical space we talk about light cones as being the regions that can be causally affected by some event at a particular location in space. In the same way, we can talk about entanglement cones that define regions in branchial space that can be affected by events at some position in branchial space. And just as there’s a causal graph that effectively knits together elementary light cones, there’s something similar that knits together entanglement cones.
That something similar is the multiway causal graph: a graph that represents causal relationships between all events that can happen anywhere in a multiway system. Here’s an example of a multiway causal graph for just a few steps of a very simple string substitution system—and it’s already pretty complicated:
LayeredGraphPlot[
 Graph[ResourceFunction["MultiwaySystem"][
   "WolframModel" -> {{{x, y}, {x, z}} -> {{y, w}, {y, z}, {w, x}}},
   {{{0, 0}, {0, 0}}}, 6, "CausalGraphStructure"]]]
But in a sense the multiway causal graph is the most complete description of everything that can affect the experience of observers. Some of the causal relationships it describes represent spacelike connections; some represent branchlike connections. But all of them are there. And so in a sense the multiway causal graph is where relativity and quantum mechanics come together. Slice one way and you’ll see relationships in physical space; slice another way and you’ll see relationships in branchial space, between quantum states.
To help see how this works, here’s a very toy version of a multiway causal graph:
Graph3D[ResourceFunction["GeneralizedGridGraph"][{4 -> "Directed", 4, 4},
  EdgeStyle -> {Darker[Blue], Darker[Blue], Purple}]]
Each point is an event that happens in some hypergraph on some branch of a multiway system. And now the graph records the causal relationship of that event to other ones. In this toy example, there are purely timelike relationships—indicated by arrows pointing down—in which basically some element of the hypergraph is affecting its future self. But then there are both spacelike and branchlike relationships, where the event affects elements that are either “spatially” separated in the hypergraph, or “branchially” separated in the multiway system.
But in all this complexity, there’s something wonderful that happens. As soon as the underlying rule has causal invariance, this implies all sorts of regularities in the multiway causal graph. And for example it tells us that all those causal graphs we get by taking different branchtime slices are actually the same when we project them into spacetime—and this is what leads to relativity.
But causal invariance has other consequences too. One of them is that there should be an analog of special relativity that applies not in spacetime but in branchtime. The reference frames of special relativity are now our quantum observation frames. And the analog of speed in physical space is the rate of entangling new quantum states.
So what about a phenomenon like relativistic time dilation? Is there an analog of that for motion in branchial space? Well, actually, yes there is. And it turns out to be what’s sometimes called the quantum Zeno effect: if you repeatedly measure a quantum system fast enough it won’t change. It’s a phenomenon that’s implied by the add-ons to the standard formalism of quantum mechanics that describe measurement. But in our models it just comes directly from the analogy between branchial and physical space.
Doing new measurements is equivalent to getting entangled with new quantum states—or to moving in branchial space. And in direct analogy to what happens in special relativity, as you get closer to moving at the maximum speed you inevitably sample things more slowly in time—and so you get time dilation, which means that your “quantum evolution” slows down.
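The quantum Zeno effect itself can be verified with completely standard quantum mechanics, independently of our models. In this Python sketch, a two-level state that a rotation would completely flip instead survives almost unchanged when it is projectively measured many times along the way:

```python
import numpy as np

def survival(n_measurements, total_angle=np.pi / 2):
    """Rotate |0> by total_angle in n equal steps, projectively
    measuring (and collapsing back to |0>) after each step.
    Returns the probability that every measurement finds |0>."""
    theta = total_angle / n_measurements
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    prob, state = 1.0, np.array([1.0, 0.0])
    for _ in range(n_measurements):
        state = U @ state
        prob *= state[0] ** 2            # chance this measurement finds |0>
        state = np.array([1.0, 0.0])     # collapse back to |0>
    return prob
```

With a single measurement at the end the state has fully flipped (survival ≈ 0); with a hundred intermediate measurements the survival probability is above 97%, and it keeps climbing toward 1 as the measurement rate increases.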
OK, so there are relativistic phenomena in physical space, and quantum analogs in branchial space. But in our models these are all effectively facets of one thing: the multiway causal graph. So are there situations in which the two kinds of phenomena can mix? Normally there aren’t: relativistic phenomena involve large physical scales; quantum phenomena tend to involve small ones.
But one example of an extreme situation where they can mix is black holes. I’ve mentioned several times that the formation of an event horizon around a black hole is associated with disconnection in the causal graph. But it’s more than that. It’s actually disconnection not only in the spacetime causal graph, but in the full multiway causal graph. And that means that there’s not only an ordinary causal event horizon—in physical space—but also an “entanglement horizon” in branchial space. And just as a piece of the spatial hypergraph can get disconnected when there’s a black hole, so can a piece of the branchial graph.
What does this mean? There are a variety of consequences. One of them is that quantum information can be trapped inside the entanglement horizon even when it hasn’t crossed the causal event horizon—so that in effect the black hole is freezing quantum information “at its surface” (at least its surface in branchial space). It’s a weird phenomenon implied by our models, but what’s perhaps particularly interesting about it is that it’s very much aligned with conclusions about black holes that have emerged in some of the latest work in physics on the so-called holographic principle in quantum field theory and general relativity.
Here’s another related, weird phenomenon. If you pass the causal event horizon of a black hole, it’s an inevitable fact that you’ll eventually get infinitely physically elongated (or “spaghettified”) by tidal forces. Well, something similar happens if you pass the entanglement horizon—except now you’ll get elongated in branchial space rather than physical space. And in our models, this eventually means you won’t be able to make a quantum measurement—so in a sense as an observer you won’t be able to “form a classical thought”, or, in other words, beyond the entanglement horizon you’ll never be able to “come to a definite conclusion” about, for example, whether something fell into the black hole or didn’t.
The speed of light c is a fundamental physical constant that relates distance in physical space to time. In our models, there’s now a new fundamental physical constant: the maximum entanglement speed, which relates distance in branchial space to time. I call this maximum entanglement speed ζ (zeta), since ζ looks a bit like a “tangled c”. I’m not sure what its value is, but a possible estimate is that it corresponds to entangling about 10^102 new quantum states per second. And in a sense the fact that this is so big is why we’re normally able to “form classical thoughts”.
Because of the relation between (multiway) causal edges and energy, it’s possible to convert ζ to units of energy per second, and our estimate then implies that ζ is about 10^5 solar masses per second. It’s a big value, although conceivably not irrelevant to something like a merger of galactic black holes. (And, yes, this would mean that for an intelligence to “quantum grok” our galaxy would take maybe six months.)
I’m frankly amazed at how much we’ve been able to figure out just from the general structure of our models. But to get a final fundamental theory of physics we’ve still got to find a specific rule. A rule that gives us 3 (or so) dimensions of space, the particular expansion rate of the universe, the particular masses and properties of elementary particles, and so on. But how should we set about finding this rule?
And actually even before that, we need to ask: if we had the right rule, would we even know it? As I mentioned earlier, there’s potentially a big problem here with computational irreducibility. Because whatever the underlying rule is, our actual universe has applied it an almost unimaginably large number of times. And if there’s computational irreducibility—as there inevitably will be—then there won’t be a way to fundamentally reduce the amount of computational effort that’s needed to determine the outcome of all these rule applications.
But what we have to hope is that somehow—even though the complete evolution of the universe is computationally irreducible—there are still enough “tunnels of computational reducibility” that we’ll be able to figure out at least what’s needed to be able to compare with what we know in physics, without having to do all that computational work. And I have to say that our recent success in getting conclusions just from the general structure of our models makes me much more optimistic about this possibility.
But, OK, so what rules should we consider? The traditional approach in natural science (at least over the past few centuries) has tended to be: start from what you know about whatever system you’re studying, then try to “reverse engineer” what its rules are. But in our models there’s in a sense too much emergence for this to work. Look at something like this:
ResourceFunction["WolframModel"][
 {{1, 2, 2}, {2, 3, 4}} -> {{4, 3, 3}, {4, 1, 5}, {2, 4, 5}},
 {{0, 0, 0}, {0, 0, 0}}, 500, "FinalStatePlot"]
Given the overall form of this structure, would you ever figure that it could be produced just by the rule:
{{x, y, y}, {y, z, u}} → {{u, z, z}, {u, x, v}, {y, u, v}}
RulePlot[ResourceFunction["WolframModel"][
  {{x, y, y}, {y, z, u}} -> {{u, z, z}, {u, x, v}, {y, u, v}}]]
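For a feel for the mechanics, here is a brute-force Python sketch of applying this rule once, with hypergraph edges represented as tuples of integers. It is not the Wolfram Language evolution code: it takes only the first match it finds (so it follows a single branch of the multiway system), and the match ordering and fresh-node naming are arbitrary choices of mine.

```python
from itertools import permutations

# The rule {{x,y,y},{y,z,u}} -> {{u,z,z},{u,x,v},{y,u,v}}, with v a fresh node
LHS = [("x", "y", "y"), ("y", "z", "u")]
RHS = [("u", "z", "z"), ("u", "x", "v"), ("y", "u", "v")]

def unify(patterns, edges):
    """Bind pattern symbols to concrete nodes consistently, or return None."""
    binding = {}
    for pat, edge in zip(patterns, edges):
        for sym, node in zip(pat, edge):
            if binding.setdefault(sym, node) != node:
                return None
    return binding

def step(edges):
    """Apply the rule at the first matching (ordered) pair of edges."""
    for i, j in permutations(range(len(edges)), 2):
        b = unify(LHS, [edges[i], edges[j]])
        if b is not None:
            b["v"] = max(n for e in edges for n in e) + 1   # fresh node
            rest = [e for k, e in enumerate(edges) if k not in (i, j)]
            return rest + [tuple(b[s] for s in pat) for pat in RHS]
    return edges

state = [(0, 0, 0), (0, 0, 0)]   # the double self-loop initial condition
state = step(state)              # two edges consumed, three produced
```

Each application consumes two edges and produces three, so the hypergraph grows by one edge per step; iterating this (and, in the full multiway picture, following every match rather than just the first) is what produces structures like the one shown above.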
Having myself explored the computational universe of simple programs for some forty years, I have to say that even now it’s amazing how often I’m humbled by the ability of extremely simple rules to give behavior I never expected. And this is particularly common with the very structureless models we’re using here. So in the end the only real way to find out what can happen in these models is just to enumerate possible rules, and then run them and see what they do.
But now there’s a crucial question. If we just start enumerating very simple rules, how far are we going to have to go before we find our universe? Or, put another way, just how simple is the rule for our universe going to end up being?
It could have been that in a sense the rule for the universe would have a special case in it for every element of the universe—every particle, every position in space, etc. But the very fact that we’ve been able to find definite scientific laws—and that systematic physics has even been possible—suggests that the rule at least doesn’t have that level of complexity. But how simple might it be? We don’t know. And I have to say that I don’t think our recent discoveries shed any particular light on this—because they basically say that lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be.
But, OK, let’s say we find that our universe can be described by some particular rule. Then the obvious immediate question would be: why that rule, and not another? The history of science—certainly since Copernicus—has shown us over and over again evidence that we’re “not special”. But if the rule we find to describe our universe is simple, wouldn’t that simplicity be a sign of “specialness”?
I have long wondered about this. Could it for example be that the rule is only simple because of the way that we, as entities existing in our particular universe, choose to set up our ways of describing things? And that in some other universe, with some other rule, the entities that exist there would set up their ways of describing things so that the rule for their universe is simple to them, even though it might be very complex to us?
Or could it be that in some fundamental sense it doesn’t matter what the rules for the universe are: that to observers embedded in a universe, operating according to the same rules as that universe, the conclusions about how the universe works will always be the same?
Or could it be that this is a kind of question that’s just outside the realm of science?
To my considerable surprise, the paradigm that’s emerging from our recent discoveries potentially seems to suggest a definite—though at first seemingly bizarre—scientific answer.
In what we’ve discussed so far we’re imagining that there’s a particular, single rule for our universe, that gets applied over and over again, effectively in all possible ways. But what if there wasn’t just one rule that could be used? What if all conceivable rules could be used? What if every updating event could just use any possible rule? (Notice that in a finite universe, there are only finitely many rules that can ever apply.)
At first it might not seem as if this setup would ever lead to anything definite. But imagine making a multiway graph of absolutely everything that can happen—including all events for all possible rules. This is a big, complicated object. But far from being structureless, it’s full of all kinds of structure.
And there’s one very important thing about it: it’s basically guaranteed to have causal invariance (essentially because if there’s a rule that does something, there’s always another rule somewhere that can undo it).
So now we can make a rule-space multiway causal graph—which will show a rule-space analog of relativity. And what this means is that in the rule-space multiway graph, we can expect to make different foliations, but have them all give consistent results.
It’s a remarkable conceptual unification. We’ve got physical space, branchial space, and now also what we can call rulial space (or just rule space). And the same overall ideas and principles apply to all of them. And just as we defined reference frames in physical space and branchial space, so also we can define reference frames in rulial space.
But what kinds of reference frames might observers set up in rulial space? In a typical case we can think of different reference frames in rulial space as corresponding to different description languages in which an observer can describe their experience of the universe.
In the abstract, it’s a familiar idea that given any particular description language, we can always explicitly program any universal computer to translate it to another description language. But what we’re saying here is that in rulial space it just takes choosing a different reference frame to have our representation of the universe use a different description language.
And roughly the reason this works is that different foliations of rulial space correspond to different choices of sequences of rules in the rule-space multiway graph—which can in effect be set up to “compute” the output that would be obtained with any given description language. That this can work ultimately depends on the fact that sequences of our rules can support universal computation (which the Principle of Computational Equivalence implies they ubiquitously will). And that is in effect why it only takes “choosing a different reference frame in rule space” to “run a different program” and get a different description of the observed behavior of the universe.
It’s a strange but rather appealing picture. The universe is effectively using all possible rules. But as entities embedded in the universe, we’re picking a particular foliation (or sequence of reference frames) to make sense of what’s happening. And that choice of foliation corresponds to a description language which gives us our particular way of describing the universe.
But what is there to say definitely about the universe—independent of the foliation? There’s one immediate thing: that the universe, whatever foliation one uses to describe it, is just a universal computer, and nothing more. And that hypercomputation is never possible in the universe.
But given the structure of our models, there’s more. Just like there’s a maximum speed in physical space (the speed of light c), and a maximum speed in branchial space (the maximum entanglement speed ζ), so also there must be a maximum speed in rulial space, which we can call ρ—that’s effectively another fundamental constant of nature. (The constancy of ρ is in effect a reflection of the Principle of Computational Equivalence.)
But what does moving in rulial space correspond to? Basically it’s a change of rule. And to say that this can only happen at a finite speed is to say that there’s computational irreducibility: that one rule cannot emulate another infinitely fast. And given this finite “speed of emulation” there are “emulation cones” that are the analog of light cones, and that define how far one can get in rulial space in a certain amount of time.
What are the units of ρ? Essentially they are program length divided by time. But whereas in the theory of computation one typically imagines that program length can be scaled almost arbitrarily by different models of computation, here this is a measure of program length that’s somehow fundamentally anchored to the structure of the rule-space multiway system, and of physics. (By the way, there’ll be an analog of curvature and Einstein’s equations in rulial space too—and it probably corresponds to a geometrization of computational complexity theory and questions like P?=NP.)
There’s more to say about the structure of rulial space. For example, let’s imagine we try to make a foliation in which we freeze time somewhere in rulial space. That’ll correspond to trying to describe the universe using some computationally reducible model—and over time it’ll get more and more difficult to maintain this as emulation cones effectively deliver more and more computational irreducibility.
So what does all this mean for our original goal—of finding a rule to describe our universe? Basically it’s saying that any (computation-universal) rule will do—if we’re prepared to craft the appropriate description language. But the point is that we’ve basically already defined at least some elements of our description language: they are the kinds of things our senses detect, our measuring devices measure, and our existing physics describes. So now our challenge is to find a rule that successfully describes our universe within this framework.
For me this is a very satisfactory solution to the mystery of why some particular rule would be picked for our universe. The answer is that there isn’t ultimately ever a particular rule; basically any rule capable of universal computation will do. It’s just that—with some particular mode of description that we choose to use—there will be some definite rule that describes our universe. And in a sense whatever specialness there is to this rule is just a reflection of the specialness of our mode of description. In effect, the only thing special about the universe to us is us ourselves.
And this suggests a definite answer to another longstanding question: could there be other universes? The answer in our setup is basically no. We can’t just “pick another rule and get another universe”. Because in a sense our universe already contains all possible rules, so there can only be one of it. (There could still be other universes that do various levels of hypercomputation.)
But there is something perhaps more bizarre that is possible. While we view our universe—and reality—through our particular type of description language, there are endless other possible description languages which can lead to descriptions of reality that will seem coherent (and even in some appropriate definition “meaningful”) within themselves, but which will seem to us to correspond to utterly incoherent and meaningless aspects of our universe.
I’ve always assumed that any entity that exists in our universe must at least “experience the same physics as us”. But now I realize that this isn’t true. There’s actually an almost infinite diversity of different ways to describe and experience our universe, or in effect an almost infinite diversity of different “planes of existence” for entities in the universe—corresponding to different possible reference frames in rulial space, all ultimately connected by universal computation and rule-space relativity.
What does it mean to make a model for the universe? If we just want to know what the universe does, well, then we have the universe, and we can just watch what it does. But when we talk about making a model, what we really mean is that we want to have a representation of the universe that somehow connects it to what we humans can understand. Given computational irreducibility, it’s not that we expect a model that will in any fundamental sense “predict in advance” the precise behavior of the universe down to every detail (like that I am writing this sentence now). But we do want to be able to point to the model—whose structure we understand—and then be able to say that this model corresponds to our universe.
In the previous section we said that we wanted to find a rule that we could in a sense connect with the description language that we use for the universe. But what should the description language for the rule itself be? Inevitably there is a great computational distance between the underlying rule and features of the universe that we’re used to describing. So—as I’ve said several times here in different ways—we can’t expect to use the ordinary concepts with which we describe the world (or physics) directly in the construction of the rule.
I’ve spent the better part of my life as a language designer, primarily building what’s now the full-scale computational language that is the Wolfram Language. And I now view the effort to find a fundamental theory of physics as in many ways just another challenge in language design—perhaps even the ultimate such challenge.
In designing a computational language what one is really trying to do is to create a bridge between two domains: the abstract world of what is possible to do computationally, and the “mental” world of what people understand and are interested in doing. There are all sorts of computational processes that one can invent (say running randomly picked cellular automaton rules), but the challenge in language design is to figure out which ones people care about at this point in human history, and then to give people a way to describe these.
Usually in computational language design one is leveraging human natural language—or the more formal languages that have been developed in mathematics and science—to find words or their analogs to refer to particular “lumps of computation”. But at least in the way I have done it, the essence of language design is to try to find the purest primitives that can be expressed this way.
OK, so let’s talk about setting up a model for the universe. Perhaps the single most important idea in my effort to find a fundamental theory of physics is that the theory should be based on the general computational paradigm (and not, for example, specifically on mathematics). So when we talk about having a language in which to describe our model of the universe we can see that it has to bridge three different domains. It has to be a language that humans can understand. It has to be a language that can express computational ideas. And it has to be a language that can actually represent the underlying structure of physics.
So what should this language be like? What kinds of primitives should it contain? The history that has led me to what I describe here is in many ways the history of my attempts to formulate an appropriate language. Is it trivalent graphs? Is it ordered graphs? Is it rules applied to abstract relations?
In many ways, we are inevitably skating at the edge of what humans can understand. Maybe one day we will have built up familiar ways of talking about the concepts that are involved. But for now, we don’t have these. And in a sense what has made this project feasible now is that we’ve come so far in developing ways to express computational ideas—and that through the Wolfram Language in particular those forms of expression have become familiar, at the very least to me.
And it’s certainly satisfying to see that the basic structure of the models we’re using can be expressed very cleanly and succinctly in the Wolfram Language. In fact, in what perhaps can be viewed as some sort of endorsement of the structure of the Wolfram Language, the models are in a sense just a quintessential example of transformation rules for symbolic expressions, which is exactly what the Wolfram Language is based on. But even though the structure is well represented in the Wolfram Language, the “use case” of “running the universe” is different from what the Wolfram Language is normally set up to do.
In the effort to serve what people normally want, the Wolfram Language is primarily about taking input, evaluating it by doing computation, and then generating output. But that’s not what the universe does. The universe in a sense had input at the very beginning, but now it’s just running an evaluation—and with all our different ideas of foliations and so on, we are sampling certain aspects of that ongoing evaluation.
It’s computation, but it’s computation sampled in a different way than we’ve been used to doing it. To a language designer like me, this is something interesting in its own right, with its own scientific and technological spinoffs. And perhaps it will take more ideas before we can finish the job of finding a way to represent a rule for fundamental physics.
But I’m optimistic that we actually already have pretty much all the ideas we need. And we also have a crucial piece of methodology that helps us: our ability to do explorations through computer experiments. If we based everything on the traditional methodology of mathematics, we would in effect only be able to explore what we somehow already understood. But in running computer experiments we are in effect sampling the raw computational universe of possibilities, without being limited by our existing understanding.
Of course, as with physical experiments, it matters how we define and think about our experiments, and in effect what description language we use. But what certainly helps me, at least, is that I’ve now been doing computer experiments for more than forty years, and over that time I’ve been able to slowly refine the art and science of how best to do them.
In a way it’s very much like how we learn from our experience in the physical world. From seeing the results of many experiments, we gradually build up intuition, which in turn lets us start creating a conceptual framework, which then informs the design of our language for describing things. One always has to keep doing experiments, though. In a sense computational irreducibility implies that there will always be surprises, and that’s certainly what I constantly find in practice, not least in this project.
Will we be able to bring together physics, computation and human understanding to deliver what we can reasonably consider to be a final, fundamental theory of physics? It is difficult to know how hard this will be. But I am extremely optimistic that we are finally on the right track, and may even have effectively already solved the fascinating problem of language design that this entails.
OK, so given all this, what’s it going to take to find the fundamental theory of physics? The most important thing—about which I’m extremely excited—is that I think we’re finally on the right track. Of course, perhaps not surprisingly, it’s still technically difficult. Part of that difficulty comes directly from computational irreducibility and from the difficulty of working out the consequences of underlying rules. But part of the difficulty also comes from the very success and sophistication of existing physics.
In the end our goal must be to build a bridge that connects our models to existing knowledge about physics. And there is difficult work to do on both sides. Trying to frame the consequences of our models in terms that align with existing physics, and trying to frame the (usually mathematical) structures of existing physics in terms that align with our models.
For me, one of the most satisfying aspects of our discoveries over the past couple of months has been the extent to which they end up resonating with a huge range of existing—sometimes so far seemingly “just mathematical”—directions that have been taken in physics in recent years. It almost seems like everyone has been right all along, and it just takes adding a new substrate to see how it all fits together. There are hints of string theory, holographic principles, causal set theory, loop quantum gravity, twistor theory, and much more. And not only that, there are also modern mathematical ideas—geometric group theory, higher-order category theory, non-commutative geometry, geometric complexity theory, etc.—that seem so well aligned that one might almost think they must have been built to inform the analysis of our models.
I have to say I didn’t expect this. The ideas and methods on which our models are based are very different from what’s ever been seriously pursued in physics, or really even in mathematics. But somehow—and I think it’s a good sign all around—what’s emerged is something that aligns wonderfully with lots of recent work in physics and mathematics. The foundations and motivating ideas are different, but the methods (and sometimes even the results) often look to be quite immediately applicable.
There’s something else I didn’t expect, but that’s very important. In studying things (like cellular automata) out in the computational universe of simple programs, I have normally found that computational irreducibility—and phenomena like undecidability—are everywhere. Try using sophisticated methods from mathematics; they will almost always fail. It is as if one hits the wall of irreducibility almost immediately, so there is almost nothing for our sophisticated methods, which ultimately rely on reducibility, to do.
But perhaps because they are so minimal and so structureless our models for fundamental physics don’t seem to work this way. Yes, there is computational irreducibility, and it’s surely important, both in principle and in practice. But the surprising thing is that there’s a remarkable depth of richness before one hits irreducibility. And indeed that’s where many of our recent discoveries come from. And it’s also where existing methods from physics and mathematics have the potential to make great contributions. But what’s important is that it’s realistic that they can; there’s a lot one can understand before one hits computational irreducibility. (Which is, by the way, presumably why we are fundamentally able to form a coherent view of physical reality at all.)
So how is the effort to try to find a fundamental theory of physics going to work in practice? We plan to have a centralized effort that will push forward with the project using essentially the same R&D methods that we’ve developed at Wolfram Research over the past three decades, and that have successfully brought us so much technology—not to mention what exists of this project so far. But we plan to do everything in a completely open way. We’ve already posted the full suite of software tools that we’ve developed, along with nearly a thousand archived working notebooks going back to the 1990s, and soon more than 400 hours of videos of recent working sessions.
We want to make it as easy for people to get involved as possible, whether directly in our centralized effort, or in separate efforts of their own. We’ll be livestreaming what we do, and soliciting as much interaction as possible. We’ll be running a variety of educational programs. And we also plan to have (livestreamed) working sessions with other individuals and groups, as well as providing channels for the computational publishing of results and intermediate findings.
I have to say that for me, working on this project both now and in past years has been tremendously exciting, satisfying, and really just fun. And I’m hoping many other people will be able to share in this as the project goes forward. I think we’ve finally got a path to finding the fundamental theory of physics. Now let’s go follow that path. Let’s have a blast. And let’s try to make this the time in human history when we finally figure out how this universe of ours works!
To comment, please visit the copy of this post at Stephen Wolfram’s Writings »
The sparse ruler problem has been famously worked on by Paul Erdős, Marcel J. E. Golay, John Leech, Alfréd Rényi, László Rédei and Solomon W. Golomb, among many others. The problem is this: what is the smallest subset S of {0, 1, ..., n} so that the unsigned pairwise differences of S give all values from 1 to n? One way to look at this is to imagine a blank yardstick. At what positions on the yardstick would you add 10 marks, so that you can measure any number of inches up to 36?
Another simple example is {0, 1, 3} of size 3, which has differences 1 − 0 = 1, 3 − 1 = 2 and 3 − 0 = 3. The sets of size 2 have only one difference. The minimal subset is not unique; the differences of {0, 2, 3} also give {1, 2, 3}.
Part of what makes the sparse ruler problem so compelling is its embodiment in an object inside every schoolchild’s desk—and its enduring appeal lies in its deceptive simplicity. Read on to see just how complicated rulers, marks and recipes can be.
First, let’s review the rules and terminology used in the sparse ruler problem. A subset S of a set {0, 1, ..., n} covers {1, ..., n} if every value from 1 to n occurs as a difference of two elements of S.
For example, what is the smallest subset of {0, 1, ..., 13} that covers the set {1, ..., 13}? The greatest number of differences for a subset of size 5 is binomial(5, 2) = 10, which is not enough to get 13 values. But a subset of size 6, with binomial(6, 2) = 15 differences, is large enough. In this case, the subset {0, 1, 6, 9, 11, 13} covers {1, ..., 13}, and so the size of the smallest covering subset of {0, 1, ..., 13} is at most 6.
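The counting argument can be double-checked by brute force. The code in this post is in the Wolfram Language, but here is a small Python sketch (the helper names `covers` and `min_marks` are mine, not from the post):

```python
from itertools import combinations

def covers(marks, n):
    """True if the pairwise differences of `marks` give every value 1..n."""
    diffs = {b - a for a, b in combinations(sorted(marks), 2)}
    return diffs >= set(range(1, n + 1))

def min_marks(n):
    """Smallest covering subset of {0,...,n}, found by brute force."""
    for k in range(2, n + 2):
        for marks in combinations(range(n + 1), k):
            if covers(marks, n):
                return k, marks
```

For n = 13, `min_marks(13)` returns 6 together with a first covering subset, confirming that no 5-mark subset suffices.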
Here are the differences for {0, 1, 6, 9, 11, 13}:
✕
{1, 13, 9, 13, 6, 6, 13, 9, 9, 11, 11, 13, 13} - {0, 11, 6, 9, 1, 0, 6, 1, 0, 1, 0, 1, 0} |
Of the 15 differences, two are achieved twice: 2 and 5. Here is a way to list the pairs explicitly:
✕
Column[SplitBy[ SortBy[Subsets[{0, 1, 6, 9, 11, 13}, {2}], Differences], Differences]] |
Let’s try another way to calculate the set of differences:
✕
Union@Abs@ Flatten@Outer[Subtract, {0, 1, 6, 9, 11, 13}, {0, 1, 6, 9, 11, 13}] |
Of the subsets that cover {1, ..., n}, let M_n be the size of a smallest subset (there may be more than one).
The following table summarizes the values of M_n for n from 1 to 12. Both {0, 1, 3} and {0, 2, 3} of size 3 cover {1, 2, 3}; note that the differences {2, 3, 1} of {0, 2, 3} give {1, 2, 3} after sorting:
✕
Text@Grid[Prepend[Table[With[{ruler = SplitToRuler[sparsedata[[n]]]}, {ruler, Row[{"[", n, "]"}], Length[ruler]}], {n, 1, 12}], {"a smallest\n subset\n", "differences [n]", Row[{" the smallest\nsubset size ", Style[Subscript["M", "n"], Italic], "\n"}]}]] |
In 1956, John Leech wrote “On the Representation of 1, 2, …, n by Differences,” which proved the bounds sqrt(2.434 n) ≤ M_n ≤ sqrt(3.348 n) for large lengths n.
There are a few terms and “rules” to keep in mind when discussing the sparse ruler problem:
This length-135 sparse ruler is nonperfect:
✕
Length@{0, 1, 2, 3, 4, 5, 6, 65, 68, 71, 74, 81, 88, 95, 102, 109, 116, 123, 127, 131, 135} |
This length-138 sparse ruler is optimal:
✕
Length@{0, 1, 2, 3, 7, 14, 21, 28, 43, 58, 73, 88, 103, 111, 119, 127, 135, 136, 137, 138} |
Here is an optimal length-50 sparse ruler with 12 marks (i.e. M_50 = 12). The list of positions of the marks is the ruler form:
✕
ruler50 = {0, 1, 3, 6, 13, 20, 27, 34, 41, 45, 49, 50}; |
This visualizes the marks:
✕
Graphics[{ Thickness[.005], Line[{{#, 1}, {#, 1.5}}] & /@ Range@50, Line[{{#, 1}, {#, 5}}] & /@ ruler50 }, Axes -> {True, False}, Ticks -> {ruler50, None}, ImageSize -> 520] |
Call the list of differences between consecutive marks the diff form. Here is the diff form for ruler50:
✕
Differences[ruler50] |
In 1963, B. Wichmann wrote “A Note on Restricted Difference Bases,” in which he constructed many sparse rulers. The following code has his original recipe and a function to read the recipe:
✕
originalwichmannrecipe = { {1, 1 + r, 1 + 2 r, 3 + 4 r, 2 + 2 r, 1}, {r, 1, r, s, 1 + r, r}}; |
✕
WichmannRuler[recipe_, {x_, y_}] := Transpose[ Select[Transpose[recipe /. Thread[{r, s} -> {x, y}]], Min[#] > 0 &]] |
With that, we can set up a function for Wichmann recipe #1:
✕
Subscript[W, 1][r_, s_] := WichmannRuler[originalwichmannrecipe, {r, s}]; |
There are thousands of Wichmann recipes. Here’s the second:
✕
WichmannRecipes[[2]] |
Here’s a function for Wichmann recipe #2:
✕
Subscript[W, 2][r_, s_] := WichmannRuler[WichmannRecipes[[2]], {r, s}]; |
Here, r and s in the recipes are replaced by 1 and 5, respectively. These representations are examples of the split form of a sparse ruler:
✕
Column[{Subscript[W, 1][1, 5], Subscript[W, 2][1, 5]}] |
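As a cross-check, the first recipe is easy to transcribe into Python (a sketch; `wichmann1` and `split_to_ruler` are hypothetical names mirroring the Wolfram definitions above):

```python
from itertools import accumulate, chain

def wichmann1(r, s):
    """Segment values and repeat counts of Wichmann recipe #1 (split form)."""
    values = (1, 1 + r, 1 + 2 * r, 3 + 4 * r, 2 + 2 * r, 1)
    counts = (r, 1, r, s, 1 + r, r)
    return values, counts

def split_to_ruler(values, counts):
    """Expand split form -> diff form, then running-sum into mark positions."""
    diffs = chain.from_iterable([v] * c for v, c in zip(values, counts))
    return list(accumulate(diffs, initial=0))
```

With r = 1, s = 5 this reproduces the optimal length-50 ruler ruler50 from earlier.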
We can use these functions to convert among the three forms of sparse ruler:
✕
DiffToRuler[diff_] := FoldList[Plus, 0, diff] |
✕
DiffToSplit[diff_] := {First /@ Split[diff], Length /@ Split[diff]} |
✕
SplitToDiff[split_] := Flatten[Table[#[[1]], {#[[2]]}] & /@ Transpose[split]] |
✕
SplitToRuler[split_] := DiffToRuler[SplitToDiff[split]] |
✕
RulerToSplit[ruler_] := DiffToSplit[Differences[ruler]] |
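For readers outside the Wolfram Language, the same conversions can be sketched in Python (the function names mirror the Wolfram ones but are not part of the post's toolkit):

```python
from itertools import accumulate, groupby

def diff_to_ruler(diff):
    """FoldList[Plus, 0, diff]: running sums give the mark positions."""
    return list(accumulate(diff, initial=0))

def diff_to_split(diff):
    """Run-length encode the diff form into (values, counts)."""
    runs = [(v, len(list(g))) for v, g in groupby(diff)]
    return [v for v, _ in runs], [c for _, c in runs]

def split_to_diff(values, counts):
    """Expand (values, counts) back into the diff form."""
    return [v for v, c in zip(values, counts) for _ in range(c)]

def ruler_to_diff(ruler):
    """Differences between consecutive marks."""
    return [b - a for a, b in zip(ruler, ruler[1:])]
```

A round trip through all three forms returns the original ruler.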
Here are the diff forms for both W_{1}[1, 5] and ruler50 from above; we can see from their identical outputs that they are in fact the same ruler:
✕
SplitToDiff[Subscript[W, 1][1, 5]] |
✕
Differences[ruler50] |
The diff form can be used to remake the ruler:
✕
DiffToRuler[%] |
Here is the split form again:
✕
Subscript[W, 1][1, 5] |
The split form can be written compactly and compared to Wichmann’s recipe with r = 1, s = 5:
✕
TraditionalForm@ Grid[{HoldForm[#1^#2] & @@@ First@*Tally /@ Split@Differences[ruler50], HoldForm[#1^#2] & @@@ Transpose[originalwichmannrecipe]}, Frame -> All] |
This Wichmann ruler is one of an infinite list of Wichmann rulers. The two length-57 sparse rulers in the following table show that different recipe parameters can give the same length:
✕
Text@Grid[{{"length", "marks", "recipe", Style["r", Italic], Style["s", Italic]}, {50, 12, "\!\(\*SuperscriptBox[\(1\), \(1\)]\) \!\(\*SuperscriptBox[\(2\), \ \(1\)]\) \!\(\*SuperscriptBox[\(3\), \(1\)]\) \ \!\(\*SuperscriptBox[\(7\), \(5\)]\) \!\(\*SuperscriptBox[\(4\), \ \(2\)]\) \!\(\*SuperscriptBox[\(1\), \(1\)]\)", 1, 5}, {57, 13, "\!\(\*SuperscriptBox[\(1\), \(1\)]\) \!\(\*SuperscriptBox[\(2\), \ \(1\)]\) \!\(\*SuperscriptBox[\(3\), \(1\)]\) \ \!\(\*SuperscriptBox[\(7\), \(6\)]\) \!\(\*SuperscriptBox[\(4\), \ \(2\)]\) \!\(\*SuperscriptBox[\(1\), \(1\)]\)", 1, 6}, {57, 13, "\!\(\*SuperscriptBox[\(1\), \(2\)]\) \!\(\*SuperscriptBox[\(3\), \ \(1\)]\) \!\(\*SuperscriptBox[\(5\), \(2\)]\) \ \!\(\*SuperscriptBox[\(11\), \(2\)]\) \!\(\*SuperscriptBox[\(6\), \(3\ \)]\) \!\(\*SuperscriptBox[\(1\), \(2\)]\)", 2, 2}, {90, 16, "\!\(\*SuperscriptBox[\(1\), \(2\)]\) \!\(\*SuperscriptBox[\(3\), \ \(1\)]\) \!\(\*SuperscriptBox[\(5\), \(2\)]\) \ \!\(\*SuperscriptBox[\(11\), \(5\)]\) \!\(\*SuperscriptBox[\(6\), \(3\ \)]\) \!\(\*SuperscriptBox[\(1\), \(2\)]\)", 2, 5}, {93, 17, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(2\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 2}, {101, 17, "\!\(\*SuperscriptBox[\(1\), \(2\)]\) \!\(\*SuperscriptBox[\(3\), \ \(1\)]\) \!\(\*SuperscriptBox[\(5\), \(2\)]\) \ \!\(\*SuperscriptBox[\(11\), \(6\)]\) \!\(\*SuperscriptBox[\(6\), \(3\ \)]\) \!\(\*SuperscriptBox[\(1\), \(2\)]\)", 2, 6}, {108, 18, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(3\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 3}, {112, 18, "\!\(\*SuperscriptBox[\(1\), \(2\)]\) \!\(\*SuperscriptBox[\(3\), \ \(1\)]\) \!\(\*SuperscriptBox[\(5\), \(2\)]\) \ \!\(\*SuperscriptBox[\(11\), \(7\)]\) 
\!\(\*SuperscriptBox[\(6\), \(3\ \)]\) \!\(\*SuperscriptBox[\(1\), \(2\)]\)", 2, 7}, {123, 19, "\!\(\*SuperscriptBox[\(1\), \(2\)]\) \!\(\*SuperscriptBox[\(3\), \ \(1\)]\) \!\(\*SuperscriptBox[\(5\), \(2\)]\) \ \!\(\*SuperscriptBox[\(11\), \(8\)]\) \!\(\*SuperscriptBox[\(6\), \(3\ \)]\) \!\(\*SuperscriptBox[\(1\), \(2\)]\)", 2, 8}, {123, 19, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(4\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 4}, {138, 20, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(5\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 5}, {153, 21, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(6\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 6}, {168, 22, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(7\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 7}, {183, 23, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(8\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 8}, {198, 24, "\!\(\*SuperscriptBox[\(1\), \(3\)]\) \!\(\*SuperscriptBox[\(4\), \ \(1\)]\) \!\(\*SuperscriptBox[\(7\), \(3\)]\) \ \!\(\*SuperscriptBox[\(15\), \(9\)]\) \!\(\*SuperscriptBox[\(8\), \(4\ \)]\) \!\(\*SuperscriptBox[\(1\), \(3\)]\)", 3, 9}, {213, 25, "\!\(\*SuperscriptBox[\(1\), \(4\)]\) \!\(\*SuperscriptBox[\(5\), \ \(1\)]\) \!\(\*SuperscriptBox[\(9\), \(4\)]\) \ \!\(\*SuperscriptBox[\(19\), \(6\)]\) 
\!\(\*SuperscriptBox[\(10\), \ \(5\)]\) \!\(\*SuperscriptBox[\(1\), \(4\)]\)", 4, 6}}] |
Next is the length-58 optimal ruler, showing that M_58 = 13. Using brute force, minimality is provable. In 2011, Peter Luschny conjectured that the length-58 optimal ruler is the largest optimal ruler that does not use Wichmann’s recipe.
✕
Text@Grid[Transpose[{{"split", "diff", "ruler", "\!\(\* StyleBox[SubscriptBox[\"M\", \"n\"],\nFontSlant->\"Italic\"]\)"}, {#, SplitToDiff[#], SplitToRuler[#], Length[SplitToRuler[#]]} &@sparsedata[[58]]}], Frame -> All] |
In 2014, Arch D. Robison wrote “Parallel Computation of Sparse Rulers,” in which months of computer time were spent on 256 Intel cores to calculate 106,535 sparse rulers up to length 213. Part of this run proved the existence of the length-135 nonperfect ruler.
So while we have identified all the sparse rulers up to length 213, we only have candidates beyond length 213. For the rest of this blog post, “conjectured sparse ruler” means a complete ruler with length greater than 213 and the minimal known number of marks. Above length 213, no sparse rulers have been proven minimal. Length 214 has the first conjectured sparse ruler:
✕
Text@Grid[{{"minimal?", "length", "marks", "compact split form"}, {"proven", 213, 25, "\!\(\*SuperscriptBox[\(1\), \(4\)]\) \!\(\*SuperscriptBox[\(5\), \ \(1\)]\) \!\(\*SuperscriptBox[\(9\), \(4\)]\) \ \!\(\*SuperscriptBox[\(19\), \(6\)]\) \!\(\*SuperscriptBox[\(10\), \ \(5\)]\) \!\(\*SuperscriptBox[\(1\), \(4\)]\)"}, {"conjectured", 214, 26, "\!\(\*SuperscriptBox[\(1\), \(5\)]\) \ \!\(\*SuperscriptBox[\(5\), \(1\)]\) \!\(\*SuperscriptBox[\(9\), \ \(4\)]\) \!\(\*SuperscriptBox[\(19\), \(6\)]\) \ \!\(\*SuperscriptBox[\(10\), \(5\)]\) \!\(\*SuperscriptBox[\(1\), \(4\ \)]\)"}}, Frame -> All] |
Robison’s run required 1.5 computer years to verify length 213. Computationally verifying length 214 would require 3 computer years using current methods: adding a single mark doubles the computational difficulty of verifying minimality with currently known methods.
You may have heard of sparse rulers, Golomb rulers and difference sets. How do these relate to each other?
In 2019, I devised a formula that expresses the excess E of a complete ruler in terms of the length n and the number of minimal marks M_n; here, round() is the rounding function:
E = M_n − round(sqrt(3 n + 9/4)).
For the first 50 lengths, E = 0. Then M_51 = 13 while round(sqrt(3 × 51 + 9/4)) = 12, so E = 1.
✕
{12 - Round[Sqrt[3 50 + 9/4]], 13 - Round[Sqrt[3 51 + 9/4]]} |
The excess formula produces the exact number of minimal marks for sparse rulers up to length 213, with two lines of code. In the On-Line Encyclopedia of Integer Sequences (OEIS), this list of the number of minimal marks for a sparse ruler is sequence A046693:
✕
A308766[n_] := If[MemberQ[{51, 59, 69, 113, 124, 125, 135, 136, 139, 149, 150, 151, 164, 165, 166, 179, 180, 181, 195, 196, 199, 209, 210, 211}, n], 1, 0]; A046693 = Table[Round[Sqrt[3 n + 9/4]] + A308766[n], {n, 213}] |
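Here is a Python sketch of the same two-line computation (the excess-1 lengths are copied from the A308766 definition above; the function names are mine):

```python
from math import sqrt

def predicted_marks(n):
    """round(sqrt(3n + 9/4)): the mark count a zero-excess ruler would have."""
    return round(sqrt(3 * n + 9 / 4))

# lengths up to 213 whose excess is 1 rather than 0 (OEIS A308766)
EXCESS_ONE = {51, 59, 69, 113, 124, 125, 135, 136, 139, 149, 150, 151,
              164, 165, 166, 179, 180, 181, 195, 196, 199, 209, 210, 211}

def minimal_marks(n):
    """A046693 for n <= 213: predicted count plus the 0-or-1 excess."""
    return predicted_marks(n) + (n in EXCESS_ONE)
```

For instance, `minimal_marks(50)` gives 12 while `minimal_marks(51)` gives 13, matching the excess jump described above.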
Based on the sparse rulers and conjectured sparse rulers to length 2020, the excess seems to be a chaotic sequence of 0s and 1s:
✕
ListPlot[Take[rulerexcess, 2020], Joined -> True, AspectRatio -> 1/30, Axes -> False, ImageSize -> 520] |
If Luschny’s conjecture is correct, then the lowest possible excess is 0 and all conjectured sparse rulers are minimal.
Without rounding, a plot of the best-known number of minimal marks minus sqrt(3 n + 9/4) shows some distinct patterns up to length 2020. Some points seem to float above the rest and break the pattern, which makes their minimality questionable:
✕
unroundedexcess = Table[{n, Round[Sqrt[3 n + 9/4]] + rulerexcess[[n]] - Sqrt[3 n + 9/4]}, {n, 1, 2020}]; ListPlot[unroundedexcess, AspectRatio -> 1/4, ImageSize -> {520, 130}] |
Here are lengths of currently conjectured sparse rulers that break the pattern:
✕
First /@ Select[unroundedexcess, #[[2]] > 1 &] |
Here is a plot of the verified number of minimal marks up to length 213:
✕
ListPlot[A046693, AspectRatio -> 1/4, ImageSize -> 520] |
Robison discovered that the sequence is not strictly increasing, as seen by the dips. Where do these dips occur?
✕
Flatten[Position[Differences[A046693], -1]] |
How are they spaced?
✕
Differences[%] |
In the previous table, the last six listed Wichmann rulers had these lengths:
✕
{138, 153, 168, 183, 198, 213}; |
These coincide with the positions of the dips:
✕
{136, 151, 166, 181, 196, 211} + 2 |
We can plot and compare Leech’s bounds for the number of minimal marks to the actual number of minimal marks:
✕
ListPlot[{Table[Sqrt[2.434 n], {n, 1, 213}], Table[A046693[[n]], {n, 1, 213}], Table[Sqrt[3.348 n], {n, 1, 213}]}, AspectRatio -> 1/5] |
The furthest values in the lines of dots are almost always lengths of optimal Wichmann rulers, with a single last known exception. We saw that some of the lengths of optimal Wichmann rulers were 138, 153, 168, 183, 198 and 213. Let us call these Wichmann values. These lengths (A289761) are given by:
✕
WichmannValues = Table[(n^2 - (Mod[n, 6] - 3)^2)/3 + n, {n, 1, 24}] |
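A quick Python transcription of the formula (a sketch; the name is mine) confirms that the last six values for n up to 24 match the dip lengths seen earlier:

```python
def wichmann_values(n_max):
    """Lengths (n^2 - ((n mod 6) - 3)^2)/3 + n for n = 1..n_max (OEIS A289761)."""
    return [(n * n - ((n % 6) - 3) ** 2) // 3 + n for n in range(1, n_max + 1)]
```

The length 50 of ruler50 is itself a Wichmann value (n = 11).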
Here I arrange the numbers up to 213 so that the bottom of each column is a Wichmann value. Under the blue line is the number of marks associated with each column. This is a numeric representation of the excess pattern:
E = 0 values are gray.
E = 1 values are bold black.
✕
Grid[Append[ Transpose[Table[PadLeft[Take[Style[If[rulerexcess[[#]] == 1, Style[#, Black, Bold, 16], Style[#, Gray, 14]]] & /@ Range[213], {WichmannValues[[n]] + 1, WichmannValues[[n + 1]]}], 15, ""], {n, 24 - 1}]], Range[3, 25]], Spacings -> {.2, .2}, Dividers -> {False, -2 -> Blue}] |
For convenience, I’ll use various terms relating to the excess pattern:
A few sample values for rulers in columns 19 and 25:
✕
Text@Grid[ Transpose[ Prepend[Flatten[{#, ExcessCoordinates[#]}] & /@ {114, 116, 120, 122, 200, 202, 204, 206, 208, 210, 212, 213}, {"length", "column", "height", "fraction", "rise"}]], Frame -> All] |
Here is the excess pattern of the best-known excess values for lengths up to 10501. E = 0 is gray, E = 1 is black. This is a pixel representation of the excess pattern:
✕
ArrayPlot[Transpose[Table[ PadLeft[ First /@ Take[Transpose[{Take[rulerexcess, 10501], Range[10501]}], {WichmannValues[[n]] + 1, WichmannValues[[n + 1]]}], 119, 2], {n, 1, 175}]], ColorRules -> {0 -> LightGray, 1 -> Black, 2 -> White}, PixelConstrained -> 3, Frame -> False] |
The creator of OEIS, N. J. A. Sloane, describes this pattern as “Dark Satanic Mills on a Cloudy Day.” The description refers to the solid black part of the pattern with its many windows (the dark mills) and to the irregular patches above (the clouds).
We calculate these coordinates for various lengths:
✕
coordinated = Table[ xy = ExcessCoordinates[n]; col = Switch[xy[[2]], 1/ 2, {RGBColor[0, 1, 1], RGBColor[0.5, 0, 0.5]}, 1/ 4, {RGBColor[1, 1, 0], RGBColor[0, 1, 0]}, 3/ 4, {RGBColor[1, 0, 0], RGBColor[0, 0, 1]}, _, {GrayLevel[0.9], GrayLevel[0]} ]; {col[[rulerexcess[[n]] + 1]], xy[[1]], {xy[[1, 1]]/120, 1 - xy[[2]]}, xy[[3]]}, {n, 1, 10501}]; |
Here is the excess pattern of the best-known excess values for lengths up to 10501, drawn as colored cells. Some colors mark exact excess fractions: 1/2 is cyan (excess 0) or purple (excess 1), 1/4 is yellow or green, and 3/4 is red or blue:
✕
Row[{Graphics[{{#[[1]], Rectangle[#[[2]]]} & /@ coordinated}, ImageSize -> {480, 318}, PlotRange -> {{3, 159}, {0, 106}}]}] |
This plots the excess fractions. The excess fractions 1/4, 1/2 and 3/4 occur in each column; they are the colored horizontal lines:
✕
Graphics[{{#[[1]], Point[#[[3]]]} & /@ coordinated}, ImageSize -> {480, 318}] |
Here’s a version of the excess pattern where the excess fraction 1/2 makes a horizontal line. A crushed version of the normalized excess fraction pattern is shown on the right. Some colors have exact excess fractions as before: 1/2 is cyan or purple, 1/4 is yellow or green, and 3/4 is red or blue:
✕
Row[{Graphics[{{#[[1]], Rectangle[#[[2]] + {0, 53 - Round[#[[4]]/2]}]} & /@ coordinated}, ImageSize -> {480, 318}, PlotRange -> {{3, 159}, {0, 106}}], Graphics[{AbsolutePointSize[.1], {#[[1]], Point[#[[3]]/{10, 1}]} & /@ coordinated}, ImageSize -> {Automatic, 320}]}] |
For the following diagram, on the left are columns C_{68} to C_{73} with cells representing lengths 1516 to 1797.
On the right is the normalized excess fraction:
✕
farey = Select[FareySequence[24], MemberQ[{1, 2, 3, 4, 6, 8, 12, 24}, Denominator[#]] &]; took = Take[coordinated, 8479]; Row[{Graphics[{EdgeForm[Black], Table[ Tooltip[{coordinated[[k, 1]], Rectangle[coordinated[[k, 2]]]}, {k, coordinated[[k, 2]], sparsedata[[k]]}], {k, 1516, 1797}], Arrowheads[Medium], Arrow[{{73, 46} + {5, 1/2}, {73, 46} + {1/2, 1/2}}], Text[Row[{"Top of column ", Subscript[Style["C", Italic], "73"], " with 73 marks, length 1751"}], {73, 46} + {5, 1/2}, {Left, Center}], Arrow[{{73, 0} + {5, 1/2}, {73, 0} + {1/2, 1/2}}], Text[Row[{"Bottom of column ", Subscript[Style["C", Italic], "73"], " with 73 marks, length 1797"}], {73, 0} + {5, 1/2}, {Left, Center}], Arrow[{{71, 17} + {5, 1/2}, {71, 17} + {1/2, 1/2}}], Text[Row[{"Window in column ", Subscript[Style["C", Italic], "71"], ", 1686 and 1687"}], {71, 17} + {5, 1/2}, {Left, Center}], Line[{{73, 24} + {3, 1/2}, {73, 24} + {5, 1/2}}], Arrow[{{73, 24} + {3, 1/2}, {73, 27} + {1/2, 1/2}}], Arrow[{{73, 24} + {3, 1/2}, {73, 21} + {1/2, 1/2}}], Text[Row[{"Window in column ", Subscript[Style["C", Italic], "73"], ", 1770 to 1776"}], {73, 24} + {5, 1/2}, {Left, Center}], Line[{{73, 12} + {3, 1/2}, {73, 12} + {5, 1/2}}], Arrow[{{73, 12} + {3, 1/2}, {73, 20} + {1/2, 1/2}}], Arrow[{{73, 12} + {3, 1/2}, {73, 2} + {1/2, 1/2}}], Text[Row[{"Mullion in column ", Subscript[Style["C", Italic], "73"], ", 1777 to 1795"}], {73, 12} + {5, 1/2}, {Left, Center}] }, ImageSize -> {380, 420}], Graphics[{AbsolutePointSize[.01], {#[[1]], Point[#[[3]]/{10, 1}]} & /@ coordinated, Style[ Text[Row[{Numerator[#], "/", Denominator[#]}], {-.03, 1 - #}], 10] & /@ farey}, AspectRatio -> 4, ImageSize -> {115, 420}]}, Alignment -> {Bottom, Bottom}] |
In the normalized excess pattern:
Various sequences from OEIS:
A004137: maximal number of edges in a graceful graph on n nodes
A046693: minimal number of marks for a sparse ruler of length n
A103300: number of perfect rulers with length n
A289761: maximum length of an optimal Wichmann ruler with n marks
A308766: lengths of sparse rulers with excess 1
A309407: round(sqrt(3 n + 9/4))
A326499: excess of a length-n sparse ruler
You can also check out the “Sparse Rulers” Demonstration, which has thousands of these sparse rulers:
Producing two million sparse rulers required over two thousand Wichmann-like rulers: construction recipes that all work with arbitrarily large r and s values. Substituting r and s values into a Wichmann recipe is computationally easy:
The excess of a length-n sparse ruler with the minimal number of marks M_n is E = M_n − round(sqrt(3 n + 9/4)).
Sparse ruler conjecture: E = 0 or 1 for all sparse rulers.
Finding sparse rulers satisfying E = 0 or 1 for all lengths under 257992 is difficult and likely couldn’t have been done without current-era computers. Finding longer-length sparse rulers turns out to be easy and could have been done back in 1963 with the following simple proof.
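To make these definitions concrete, here is a small brute-force sketch in Python (illustrative only; the post itself uses the Wolfram Language, and the helper names covers, minimal_marks and excess are my own):

```python
from itertools import combinations
from math import sqrt

def covers(marks, length):
    """True if the pairwise differences of the marks include every 1..length."""
    diffs = {b - a for a, b in combinations(sorted(marks), 2)}
    return set(range(1, length + 1)) <= diffs

def minimal_marks(length):
    """Smallest number of marks on [0, length] measuring all of 1..length,
    found by exhaustive search (only feasible for small lengths)."""
    for k in range(2, length + 2):
        for inner in combinations(range(1, length), k - 2):
            if covers((0,) + inner + (length,), length):
                return k

def excess(length, marks):
    """E = marks - round(sqrt(3n + 9/4)), the excess defined in the text."""
    return marks - round(sqrt(3 * length + 9 / 4))
```

For lengths 1 through 6 this finds 2, 3, 3, 4, 4, 4 marks, each with excess 0; real searches at lengths in the thousands need the Wichmann recipes instead.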
is the split form of Wichmann recipe 1, or .
is , , : or in the diff form.
is an extension. A sparse ruler starting with 1s in the diff form can be extended by up to 1s with an extra mark at the end. This new ruler looks like . The new lengths above are handled by differences , and . Note that is not a sparse ruler since the length cannot be expressed as a difference.
The “Wichmann Columns” Demonstration generates a column in the excess pattern by using only sparse rulers made by the first two Wichmann recipes, W_{1} and W_{2}, and extensions of these rulers.
Red indicates that a sparse ruler cannot be generated by W_{1}, W_{2} or by extending them.
Blue indicates a generated sparse ruler with excess 0.
Green indicates a generated sparse ruler with excess 1.
We can see in the following Manipulate that one length cannot be covered by this method in the excess-pattern column representing sparse rulers with 236 marks. Adjust the slider or hover over a value to get a Tooltip with the generated sparse ruler:
Red pixels show the lengths where extensions fail in the excess pattern:
✕
pixels = Table[ Reverse[PadRight[Switch[#[[2]], RGBColor[0, 0, 1], 1, RGBColor[0, Rational[2, 3], 0], 2, RGBColor[1, 0, 0], 3] & /@ Reverse[First /@ WichmannColumn[k][[1, 1, 1]]], 600]], {k, 2, 895}]; ArrayPlot[Transpose[Drop[pixels, 363]], PixelConstrained -> 1, Frame -> False, ColorRules -> {0 -> White, 1 -> LightGray, 2 -> Gray, 3 -> Red}] |
Lengths of sparse rulers generated by W_{1} and W_{2} are given by order-2 polynomials differing by 1. The behavior of values generated by these polynomials is completely predictable and ultimately generates two weird sequences: sixsev and sixfiv:
✕
Text@Grid[Prepend[{Subscript[Style["W", Italic], #], WichmannLength[WichmannRecipes[[#]]], WichmannMarks[WichmannRecipes[[#]]]} & /@ {1, 2}, {"recipe", "length", "marks"}], Frame -> All] |
Sequence sixsev consists of infinitely many 6s and 7s. Similarly, the sequence sixfiv consists entirely of 6s and 5s:
✕
cutoff = 15; (*raise the cutoff to go farther*) sixsev = Drop[ Flatten[Table[Table[{Table[6, {n}], 7}, {6}], {n, 0, cutoff}]], 1]; sixfiv = Drop[ Flatten[Table[Table[{Table[6, {n}], 5}, {6}], {n, 0, cutoff}]], 2]; |
✕
Column[{Take[sixsev, 80], Take[sixfiv, 80]}] |
What are the r and s values for the W_{1} and W_{2} recipes in column 236? What are the Wichmann recipe column zeros (WRCZ)? Code for WRCZ, based on sixsev and sixfiv, is shown in the initialization section in the downloadable notebook. Column 236 in the excess pattern has seven sets of r and s values. The height of a column is roughly (2/3)*column, 159 in this case. The average possible extension is roughly a quarter of the column height:
✕
WRCZ[236] |
In the excess pattern, each column divides into quarter sections with the same size as the extension lengths of W_{1} and W_{2}. If we can show that eventually there are at least four reasonably spaced W_{1} and W_{2} zeros in each column, we’re done:
✕
Row[{Graphics[{{#[[1]], Rectangle[#[[2]]]} & /@ coordinated}, ImageSize -> {480, 318}, PlotRange -> {{3, 159}, {0, 106}}]}] |
The last column in the excess pattern without four reasonably spaced W_{1} and W_{2} zeros is column 880:
✕
WRCZ[880] |
Here are the lengths generated by these pairs:
✕
zero880 = (3 + 8 r + 4 r^2 + 3 s + 4 r s) /. {r -> #[[1]], s -> #[[2]]} & /@ WRCZ[880] |
Notice how the generated lengths for this column are palindromic, a worst-case scenario. Length 257992 isn’t covered by the zeros here and is out of reach of the last zero in the previous column.
The acceleration of change (the second difference) between values generated by the zero polynomial is a constant −24. The spacing between zeros is predictable:
✕
Differences[Differences[zero880]] |
Only four reasonably spaced zeros are needed per column. The polynomial inexorably offers more and more zeros. Column 880 is the last column where extensions can fail:
✕
ListPlot[Table[Length[WRCZ[n]], {n, 50, 3000}]] |
Another plot showing that extensions overwhelm the differences:
✕
ListPlot[Table[(WRCZ[k][[1, 1]] + 2) - Max[ Differences[ Union[3 + 8 r + 4 r^2 + 3 s + 4 r s /. Thread[{r, s} -> #] & /@ WRCZ[k]]]] , {k, 50, 2050}], Joined -> True] |
All integer lengths greater than 257992 (corresponding to 880 marks) are excess-01 rulers made by extensions to Wichmann recipe 1.
All integer lengths greater than 119206 (corresponding to 598 marks) are excess-01 rulers made by extensions and double extensions to Wichmann recipe 1. Here’s an example double extension that covers length 257992:
✕
SplitExtensions[ Last[SplitExtensions[ WichmannRuler[WichmannRecipes[[1]], {146, 292}]]]][[6]] |
✕
Dot @@ % |
We can programmatically verify the conjecture with precalculated rulers to length 2020, or to length 257992 with more running time. This tallies the number of 0s and 1s for the excess, up to length 2020:
✕
Tally[Sparseness[SplitToRuler[#]] & /@ Take[sparsedata, 2020]] |
I knew Robison found rulers to length 213, so I wanted to show samples. But except for the counts, all the ruler data was lost. I rebuilt it, but without access to the Intel superclusters. This search started with trying to make an image showing a row of column-presented sparse rulers from length 1 to length 213.
First, here are sparse rulers up to length 36 with the mark positions converted into pixel positions. The gray rows indicate that the sparse ruler for that length is unique:
✕
Row[{Style[ Column[Table[SplitToRuler[sparsedata[[n]]], {n, 1, 36}], Alignment -> Right], 8], ArrayPlot[Table[PadRight[ReplacePart[Table[0, {n + 1}], ({# + 1} & /@ SplitToRuler[sparsedata[[n]]]) -> 1], 37] + If[counts[[n]] == 1, 2, 0], {n, 1, 36}], PixelConstrained -> 11, ColorRules -> {0 -> White, 1 -> Black, 2 -> GrayLevel[.9], 3 -> Black }, Frame -> False]}] |
The following plot is a transpose of the previous plot, extended to a length of 213. Each column represents a sparse ruler, with gray columns indicating uniqueness. These columns line up with the log plot after the next paragraph:
✕
Row[{Spacer[20], ArrayPlot[ Transpose[Reverse /@ Table[PadRight[ReplacePart[Table[0, {n + 1}], ({# + 1} & /@ SplitToRuler[sparsedata[[n]]]) -> 1], 215] + If[counts[[n]] == 1, 2, 0], {n, 1, 213}]], PixelConstrained -> 2, ColorRules -> {0 -> White, 1 -> Black, 2 -> GrayLevel[.9], 3 -> Black }, Frame -> False]}] |
And here is a log plot of the number of distinct sparse rulers of each length up to 213, which shows that there are usually fewer excess-0 (blue) rulers and more excess-1 (brown) rulers. Points on the bottom correspond to unique rulers (and a gray column in the previous image):
✕
ListPlot[Take[#, 2] & /@ # & /@ GatherBy[Transpose[{Range[213], Log /@ Take[counts, 213], Take[rulerexcess, 213]}], Last], ImageSize -> 450] |
One length has 15990 distinct sparse rulers. These counts are sequence A103300. Out of the first 213 lengths, 31 of them have a unique sparse ruler. I suspect many lengths above 213 have unique or hard-to-find minimal representations.
The following log plot shows the number of distinct sparse rulers and conjectured sparse rulers of each length up to 10501, found in the search that produced 2,016,735 sparse rulers and conjectured sparse rulers:
✕
ListPlot[Take[#, 2] & /@ # & /@ GatherBy[Transpose[{Range[10501], Log /@ Take[counts, 10501], Take[rulerexcess, 10501]}], Last]] |
In the downloadable notebook I show many ways to use a sparse ruler to generate new sparse rulers, which can in turn make more sparse rulers. I call this process recursion. Processing shorter-length rulers gave better results and needed less time, so rulers of a length above 4000 were initially not used to produce more rulers. After cracking the particularly hard length 1792, I extended the new ruler processing to length 7000 in hopes of finding an example of length 5657. After checking to 10501, I temporarily stopped the search.
Various regularities and patterns can be seen, but part of the change in pattern is due to arbitrary cutoffs in processing at 4000 and 7000. One curious length has 363 rulers; a nearby length has 3619 rulers. If an excess-2 sparse ruler exists, the first clue will likely be an excess-1 length with an unusually high count of examples.
An infinite number of complete rulers can be made using all 2069 Wichmann-like recipes. How well does the catalog of Wichmann recipes work? To find out, I tried the following overnight run:
✕
addedrulers=Table[With[{wich=FindWichmann[hh][[1,1]]}, WichmannRuler[WichmannRecipes[[wich[[1]]]], wich[[3]]]], {hh,10520,17553}] |
How do these 7033 new complete rulers match up with the pattern? About 6448 rulers match the previous pattern well. About 587 rulers appear to be violations:
✕
ArrayPlot[ Transpose[Drop[Table[PadLeft[Take[Take[oldrulerexcess, 17553], {WichmannValues[[n]] + 1, WichmannValues[[n + 1]]}], 151, 6], {n, 1, 227}], 92]], ColorRules -> {0 -> LightGray, 1 -> Black, 6 -> White, 2 -> Green, 3 -> Brown, 4 -> Red, 5 -> Yellow }, PixelConstrained -> 4, Frame -> False] |
Adding an extension is the simplest way to make new complete rulers. Let us try that. This code (which will also require a long running time) finds 586 lengths that can be improved this simple way:
✕
oldsparsedata=CloudGet["https://wolfr.am/KeKbjOBs"]; rulerexcess = oldrulerexcess; newrulers = First /@ SplitBy[ Sort[{Last[#], Length[#], RulerToSplit[#]} & /@ Complement[Union[SparseCheckImprove /@ Flatten[ Table[With[{ruler = SplitToRuler[#]}, Append[ruler, Last[ruler] + n]], {n, 1, 40}] & /@ Drop[oldsparsedata, 10501], 1]], {False}]], First]; Do[length = newrulers[[index, 1]]; rulerexcess[[length]] = newrulers[[index, 2]] - Round[Sqrt[3 length + 9/4]]; sparsedata[[length]] = newrulers[[index, 3]], {index, 1, Length[newrulers]}]; |
After that trial, a single exception to the sparse ruler conjecture remains in this range, at length 16617. The pattern cleaned up nicely:
✕
ArrayPlot[ Transpose[ Drop[Table[ PadLeft[Take[ReplacePart[Take[rulerexcess, 17553], 16617 -> 2], {WichmannValues[[n]] + 1, WichmannValues[[n + 1]]}], 151, 6], {n, 1, 227}], 92]], ColorRules -> {0 -> LightGray, 1 -> Black, 6 -> White, 2 -> Green, 3 -> Brown, 4 -> Red, 5 -> Yellow }, PixelConstrained -> 4, Frame -> False] |
I did not expect this trial to work so well.
I had to find an example for length 16617. The tools in this notebook gave me an example in a few hours. The sparse ruler conjecture is true to at least 17553:
✕
sparse16617 = {{1, 75, 1, 75, 149, 74, 42, 1, 19, 1}, {32, 1, 4, 37, 73, 37, 1, 32, 2, 4}}; temp = SplitToRuler[sparse16617]; {Last[temp], Length[temp], Sparseness[temp]} |
Length 16617 is the final difficult value. All lengths from 16618 to 257992 can be solved with the 2069 known Wichmann recipes or extensions.
After the initialization section in the notebook is ReasonableRuler, which will find a ruler with excess 0 or 1 for any given positive integer length. Here’s a ruler for length 100000:
✕
ReasonableRuler[100000] |
The function generates example sparse rulers with E = 0 or 1 for all lengths up to 257992 within a few minutes.
QED.
The Leech bounds can be improved. The Leech upper and lower bounds drift far from the best-known values for minimal marks:
✕
ListPlot[{Table[Sqrt[2.434 n] - (Sqrt[3 n + 9/4]), {n, 1, 17553}], Table[rulerexcess[[n]] + Round[Sqrt[3 n + 9/4]] - Sqrt[3 n + 9/4], {n, 1, 17553}], Table[Sqrt[3.348 n] - (Sqrt[3 n + 9/4]), {n, 1, 17553}]}, AspectRatio -> 1/5] |
I know of 11 rulers with the following properties:
✕
highzero = {{{1, 3, 2, 8, 17, 1, 9, 1}, {3, 1, 1, 3, 9, 1, 4, 3}}, {{1, 4, 8, 17, 9, 6, 3, 9, 1}, {4, 1, 2, 9, 1, 1, 1, 3, 3}}, {{1, 2, 7, 9, 1, 9, 17, 8, 5, 1}, {3, 1, 1, 2, 1, 1, 9, 3, 1, 3}}, {{1, 3, 5, 3, 8, 17, 9, 2, 7, 2, 9, 1}, {2, 2, 1, 1, 2, 9, 1, 1, 1, 1, 2, 2}}, {{1, 3, 10, 21, 11, 1, 11, 1}, {4, 2, 4, 10, 1, 1, 4, 4}}, {{1, 2, 9, 11, 1, 11, 21, 10, 6, 1}, {4, 1, 1, 2, 1, 2, 10, 4, 1, 4}}, {{1, 3, 10, 21, 11, 1, 11, 1}, {4, 2, 4, 11, 1, 1, 4, 4}}, {{1, 2, 9, 11, 1, 11, 21, 10, 6, 1}, {4, 1, 1, 2, 1, 2, 11, 4, 1, 4}}, {{1, 3, 4, 12, 25, 13, 1, 13, 1}, {5, 1, 1, 5, 12, 2, 1, 4, 5}}, {{1, 3, 4, 12, 25, 13, 1, 13, 1}, {5, 1, 1, 5, 13, 2, 1, 4, 5}}, {{1, 3, 5, 14, 29, 15, 1, 15, 1}, {6, 1, 1, 6, 14, 3, 1, 4, 6}}}; Text@Grid[Prepend[{Dot @@ #, Row[{ToString[Numerator[#]], "/", ToString[Denominator[#]]}] &@ ExcessCoordinates[Dot @@ #][[2]], #} & /@ highzero, {"length", "excess fraction", "ruler"}], Frame -> All] |
I’ve shown how approaching the problem computationally with the Wolfram Language can help not only to solve the sparse rulers problem that has historically fascinated so many, but also to construct a proof. Make your own mark: to continue exploring, be sure to download this post’s notebook, which features lots of additional code, the connection between sparse rulers and graceful graphs, and a longer discussion of finding sparse rulers. Can any of the current excess values be improved? Are there more excellent Wichmann-like recipes? I would love to know, so submit your recipes in the comments or to Wolfram Community!
Many thanks to T. Sirgedas, A. Robison, G. Beck and N. J. A. Sloane for help with this search.
Leech, J. “On the Representation of 1, 2, …, n by Differences.” Journal of the London Mathematical
Society s1–31.2 (1956): 160–169.
Luschny, P. “The Optimal Ruler Conjecture.” The On-Line Encyclopedia of Integer Sequences.
Pegg, E. “Sparse Ruler Conjecture.” Wolfram Community.
Rédei, L. and A. Rényi. “On the Representation of the Numbers 1, 2, …, n by Means of Differences.”
Matematicheskii Sbornik 24(66), no. 3 (1949): 385–389.
Robison, A. D. “Parallel Computation of Sparse Rulers.” Intel Developer Zone.
Rokicki, T. and G. Dogon. “Golomb Rulers: Pushing the Limits.” cube20.org.
Wichmann, B. “A Note on Restricted Difference Bases.” Journal of the London Mathematical Society
s1–38.1 (1963): 465–466. doi:10.1112/jlms/s1-38.1.465.
Wikipedia Contributors. “Sparse Ruler.” Wikipedia, the Free Encyclopedia.
Before I give strict definitions, here is the intuitive version of an integer partition via an example: 14 = 7 + 3 + 3 + 1. However, don’t add up! Just think of the sum 14 as being broken up into the four parts: 7, 3, 3, 1. Now the standard additive question is, how many ways are there of breaking 14 into parts? In other words, how many partitions of 14 are there? As we often say, the Wolfram Language has a function for that:
✕
PartitionsP[14] |
I’ll explain the pieces of the problem at hand as we go along; consider this a succinct abstract:
Two infinite lower-triangular matrices related to integer partitions are inverses of each other. The matrix ν comes from an additive analogue of the multiplicative Möbius μ function, while γ comes from counting generalized complete partitions; a complete partition of n has all possible subsums 1 to n.
First I’ll set up the function definitions we will use.
✕
Ferrers@p_ := Framed@Grid[Table["\[FilledCircle]", #] & /@ p] |
✕
ConjugatePartition[l_List] := Module[{i, r = Reverse[l], n = Length[l]}, Table[n + 1 - Position[r, _?(# >= i &), Infinity, 1][[1, 1]], {i, l[[1]]}]] |
✕
DistinctPartitionQ@x_ := Length@x == Length@Union@x |
✕
DistinctPartitions@n_ := Select[IntegerPartitions@n, DistinctPartitionQ] |
✕
PartitionMu@\[Lambda]_ := If[DistinctPartitionQ@\[Lambda], (-1)^Length@\[Lambda], 0] |
✕
DistinctPartitionsByMax[n_, m_] := DistinctPartitionsByMax[n, m] = Select[IntegerPartitions@n, (Sort@# == Union@#) && (Max@# == m) &] |
✕
PartitionsMuByMax[n_, m_] := PartitionsMuByMax[n, m] = Length@DistinctPartitionsByMax[n, m] - 2 Length@Select[DistinctPartitionsByMax[n, m], EvenQ@*Length] |
✕
PartitionsMuByMax@r_ := Table[PartitionsMuByMax[i, j], {i, r}, {j, i}] |
✕
\[Nu]@r_ := PadRight@Table[PartitionsMuByMax[i, j], {i, r}, {j, i}] |
✕
KStepPartitionQ[\[Lambda]_, k_] := MemberQ[Range@k, Last@\[Lambda]] && And @@ Table[\[Lambda][[j]] - k <= Total@Drop[\[Lambda], First@Last@Position[\[Lambda], \[Lambda][[j]]] ], {j, -1 + Length@\[Lambda]}] |
✕
KStepPartitionQ[0, _] := {{0}} |
✕
KStepPartitions[n_, k_] := Select[IntegerPartitions@n, KStepPartitionQ[#, k] &] |
✕
KStep[0, k_] := 1 |
✕
KStep[n_, k_] := KStep[n, k] = Length@KStepPartitions[n, k] |
✕
CompletePartitionQ@p_ := KStepPartitionQ[p, 1] |
✕
CompletePartitions[n_] := KStepPartitions[n, 1] |
✕
Complete[n_] := KStep[n, 1] |
✕
pre\[Gamma]@r_ := Table[KStep[i - 1, j - 1], {i, r}, {j, r}] |
✕
\[Gamma]@r_ := PadRight@Table[KStep[i - j, j - 1], {i, r}, {j, i}] |
✕
StrictCompositions[n_] := Join @@ Permutations /@ IntegerPartitions[n] |
✕
StrictCompositionsByMax[n_, m_] := Total[-(-1)^(Length /@ Select[StrictCompositions@n, Max@# == m &])] |
✕
\[Sigma]@r_ := PadRight@Table[StrictCompositionsByMax[n, m], {n, r}, {m, n}] |
✕
\[Alpha]@r_ := PadRight@Table[1, {i, r}, {j, i}] |
✕
\[Chi]@r_ := PadRight@Table[If[Mod[n, k] == 0, MoebiusMu[n/k], 0], {n, r}, {k, n}] |
Let’s establish the definitions for a multiset and an integer partition:
For example, the multiset {3, 1, 1} is a partition of 5.
In Mathematica, we use a list:
✕
{3, 1, 1} // Total |
Since the elements of a multiset and a set are unordered, we can arbitrarily choose to order the parts of a partition from largest to smallest. Here are the integer partitions of 5:
✕
IntegerPartitions[5] |
Here they are again more compactly:
✕
Row /@ IntegerPartitions@5 |
An older alternative definition is along these lines: “A partition is a way of writing an integer n as a sum of positive integers where the order of the addends is not significant…. By convention, partitions are normally written from largest to smallest addends… for example, 10 = 3 + 2 + 2 + 2 + 1.”
With such a definition, 3 + 2 + 2 + 2 + 1 has to be frozen, because as an arithmetic expression it is 10 and the parts are gone.
Yet another definition: the finite sequence (λ1, λ2, …, λk) is a partition of n if λ1 ≥ λ2 ≥ … ≥ λk ≥ 1 and λ1 + λ2 + … + λk = n.
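For readers following along outside the Wolfram Language, the same objects are easy to enumerate with a short recursive generator; here is an illustrative Python sketch:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing lists of parts,
    mirroring the output of IntegerPartitions."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    # choose the largest part first, then partition the remainder
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest
```

Here list(partitions(5)) has seven entries, matching IntegerPartitions[5].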
For each part of a partition, draw a row of that many dots, then stack the rows:
✕
Ferrers@{2, 1, 1} |
The conjugate partition of a partition λ is the partition corresponding to the transpose of the Ferrers diagram of λ:
✕
Ferrers@ConjugatePartition@{2, 1, 1} |
So {3, 1} is the conjugate partition of {2, 1, 1}, and vice versa.
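Transposing the Ferrers diagram is a one-liner outside the Wolfram Language, too; this illustrative Python sketch mirrors the ConjugatePartition function defined earlier:

```python
def conjugate(p):
    """Conjugate of a weakly decreasing partition p:
    entry i is the number of parts that are at least i,
    i.e. the column counts of the Ferrers diagram."""
    return [sum(1 for part in p if part >= i) for i in range(1, p[0] + 1)]
```

conjugate([2, 1, 1]) gives [3, 1], and applying it twice returns the original partition.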
A distinct partition has no repeated part. Here are the four distinct partitions of 6:
✕
Row /@ DistinctPartitions@6 |
The remaining partitions of 6 have repeated parts:
✕
Row /@ Complement[IntegerPartitions@6, DistinctPartitions@6] |
This is the sequence counting the number of distinct partitions of n for n = 1, 2, …, 20:
✕
PartitionsQ@Range[20] |
The numbers of partitions of 1 through 6 are 1, 2, 3, 5, 7, 11, but the next number is not 13:
✕
PartitionsP@Range[12] |
The generating function for this sequence is:
✕
Row[{Sum[PartitionsP@n x^n, {n, 12}], " + \[Ellipsis]"}] |
The generating function is equal to the infinite product 1/((1 − x)(1 − x^2)(1 − x^3)...).
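The coefficients of that product can also be extracted numerically. Here is an illustrative Python sketch (not the Wolfram Language used in this post) that multiplies the factors 1/(1 − x^k) one at a time:

```python
def partition_series(order):
    """Coefficients of prod_{k>=1} 1/(1 - x^k), truncated at x^order."""
    coeffs = [1] + [0] * order
    for k in range(1, order + 1):
        # multiplying by 1/(1 - x^k) is a running sum with stride k
        for n in range(k, order + 1):
            coeffs[n] += coeffs[n - k]
    return coeffs
```

The coefficient list reproduces the partition numbers p(0) through p(order).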
The number of distinct partitions of 1 through 12:
✕
PartitionsQ@Range[12] |
The generating function for this sequence is:
✕
Row[{Sum[PartitionsQ@n x^n, {n, 12}], " + \[Ellipsis]"}] |
It is equal to the infinite product (1 + x)(1 + x^2)(1 + x^3)....
A square-free integer is one that is not divisible by a square greater than 1. Here are the square-free numbers up to 100:
✕
Select[Range@100, SquareFreeQ] |
Here are numbers up to 100 that are not square free:
✕
Select[Range@100, Not@*SquareFreeQ] |
In multiplicative number theory, the Möbius μ function is defined on the positive integers as follows: μ(n) = 0 if n is divisible by a square greater than 1; otherwise μ(n) = (−1)^k, where k is the number of prime factors of n.
In other words, μ of a square-free integer is −1 or +1 according to whether it has an odd or an even number of prime factors. For example, μ(2) = −1, μ(6) = +1 and μ(4) = 0.
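That definition translates directly into trial division. An illustrative Python sketch (the post itself uses the built-in MoebiusMu):

```python
def mobius(n):
    """Moebius mu: 0 if n has a squared prime factor,
    else (-1)^(number of prime factors)."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # p^2 divides the original n
                return 0
            result = -result    # one more prime factor
        p += 1
    if n > 1:                   # leftover prime factor
        result = -result
    return result
```

For instance, mobius(6) is 1 and mobius(4) is 0.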
The function μP is the partition analogue of the ordinary Möbius function μ:
✕
Text@Grid[{ {"\[Mu]", "\!\(\*SubscriptBox[\(\[Mu]\), \(P\)]\)"}, {, }, {"product", "partition"}, {"primes factors", "parts"}, {"square\[Hyphen]free", "distinct"} }, Alignment -> Left, Dividers -> {{False, True}, {False, True}}] |
The definition of μP: if λ is a distinct partition, μP(λ) = (−1)^(number of parts of λ); otherwise μP(λ) = 0.
Here are the partitions of 6 and the corresponding values of the Möbius partition function μP:
✕
Grid[{Row@#, PartitionMu@#} & /@ IntegerPartitions@6, Alignment -> {Right, Left}] |
The prime example of an infinite lower-triangular matrix is Pascal’s triangle. Imagine that the rows keep going down and the columns keep going to the right. For readability, let’s replace 0s with dots:
✕
MatrixForm[t10 = Table[Binomial[n, k], {n, 0, 9}, {k, 0, 9}], TableAlignments -> Right] /. 0 -> "\[CenterDot]" |
Here is the matrix product of Pascal’s triangle with itself:
✕
MatrixForm[t10.t10, TableAlignments -> Right] /. 0 -> "\[CenterDot]" |
Here is the matrix inverse of Pascal’s triangle:
✕
MatrixForm[Inverse@t10, TableAlignments -> Right] /. 0 -> "\[CenterDot]" |
The Stirling numbers of the first and second kind are another example of a pair of inverse lower-triangular matrices.
A Stirling number of the first kind counts, up to sign, how many permutations of n elements have k cycles:
✕
(s1 = Table[StirlingS1[n, k], {n, 8}, {k, 8}]) /. 0 -> "\[CenterDot]" // MatrixForm |
A set partition of a finite set S, say {1, 2, 3, 4}, is a set of disjoint nonempty subsets of S whose union is S.
A Stirling number of the second kind counts how many set partitions of an n-element set have k subsets:
✕
(s2 = Table[StirlingS2[n, k], {n, 8}, {k, 8}]) /. 0 -> "\[CenterDot]" // MatrixForm |
The two matrices are inverses of each other:
✕
Row[{MatrixForm@s1, Style[" \[CenterDot] ", 24], MatrixForm@s2, Style[" = ", 24], MatrixForm[s1.s2]}] /. 0 -> "\[CenterDot]" |
For square matrices A and B, if AB = I, then BA = I. As the Demonstration The Derivative and the Integral as Infinite Matrices shows, there are (very familiar) infinite matrices A and B such that AB is the identity matrix, but BA is not.
Even though infinite lower-triangular matrices with 1s on the main diagonal behave well, we deal only with finite n × n truncations.
Define the matrix ν by ν(n, m) = −Σ μP(λ), where the sum is over the partitions λ of n with maximum part m, and n ≥ m ≥ 1.
Here is an alternative definition.
Let O(n, m) be the set of partitions of n into an odd number of distinct parts with maximum part m.
Let E(n, m) be the same, except the number of parts should be even.
Let |O(n, m)| be the number of elements in O(n, m).
Then ν(n, m) = |O(n, m)| − |E(n, m)|.
Here is ν, truncated to 10×10:
✕
\[Nu]@10 /. 0 -> "\[CenterDot]" // MatrixForm |
To verify that ν(10, 5) = 2, look at the partitions of 10:
✕
Row /@ IntegerPartitions@10 |
The ones with maximum part 5 are:
✕
Select[IntegerPartitions@10, Max@# == 5 &] |
Applying μP to each of those gives:
✕
PartitionMu /@ % |
Minus the sum is 2, so ν(10, 5) = 2, as claimed.
As Jacobi wrote, “Always invert!” (referring to elliptic integrals). This is the inverse of ν:
✕
Inverse[\[Nu]@15] /. 0 -> "\[CenterDot]" // MatrixForm |
What is the sequence in the second column?
You can find the sequence at the OEIS by looking it up. That hits A126796: Number of complete partitions of n, which is a great start in understanding the matrix !
For the matrix γ, let’s look at subpartitions and subsums of a partition. A subpartition of a partition λ is a submultiset of λ. For instance, {3, 1} is a subpartition of {3, 1, 1}. A subsum is the sum of a subpartition. So there are eight (2^3 = 8) subsums of {3, 1, 1}, corresponding to its eight subpartitions:
✕
Text@Grid[ Transpose@{Prepend[Subsets@{3, 1, 1}, "subpartition"], Prepend[Total /@ Subsets@{3, 1, 1}, "subsum"]} ] |
Now let’s look at complete partitions. We define a partition of n to be complete if it has all possible subsums 1, 2, …, n.
Here are the five complete partitions of 6:
✕
Row /@ Select[IntegerPartitions@6, CompletePartitionQ] |
And here are the partitions of 6 that are not complete:
✕
Row /@ Select[IntegerPartitions@6, Not@*CompletePartitionQ] |
This is the sequence counting the number of complete partitions of n for n = 0, 1, …, 10:
✕
Complete /@ Range[0, 10] |
Consider the partition 7311. We get the subsums 1, 2, 3, 4, 5 easily from 311. But we cannot get 6, so 7311 is not complete.
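The subsum check is a direct computation. An illustrative Python sketch (not the notebook’s Wolfram code):

```python
from itertools import combinations

def subsums(p):
    """All subsums of the partition p, including the empty subsum 0."""
    return {sum(c) for r in range(len(p) + 1) for c in combinations(p, r)}

def is_complete(p):
    """A partition of n is complete if its subsums include every 1..n."""
    return set(range(1, sum(p) + 1)) <= subsums(p)
```

is_complete([3, 1, 1]) is True, while is_complete([7, 3, 1, 1]) is False because 6 is not a subsum.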
Qualitatively, if a part is too large relative to the other parts, we cannot get some intermediate subsums. Park’s condition makes this precise.
Theorem (Park): A partition λ1 ≥ λ2 ≥ … ≥ λk is complete iff λk = 1 and, for each j &lt; k, λj ≤ 1 + λ(j+1) + … + λk.
For example, {5, 1, 1} is not complete (no subsum is 3) because 5 > 1 + 1 + 1.
Using Park’s condition, it is easy to check—but only if you want!—that the conjugate of a distinct partition is a complete partition.
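Park’s condition turns the exponential subsum check into a single pass over the parts. An illustrative Python sketch, checked against brute force:

```python
from itertools import combinations

def park_complete(p):
    """Park's criterion: scanning parts in increasing order,
    each part must be at most 1 + (sum of the parts already seen)."""
    total = 0
    for part in sorted(p):
        if part > 1 + total:
            return False
        total += part
    return bool(p)

def brute_complete(p):
    """Direct check that every 1..sum(p) occurs as a subsum."""
    sums = {sum(c) for r in range(len(p) + 1) for c in combinations(p, r)}
    return set(range(1, sum(p) + 1)) <= sums
```

On the partitions of 6 it flags exactly the five complete ones.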
Given a non-negative integer k, define a partition λ1 ≥ λ2 ≥ … ≥ λr to be k-step iff λr ≤ k and, for each j &lt; r, λj ≤ k + λ(j+1) + … + λr. Define the empty partition to be the only zero-step partition.
Clearly, a one-step partition is a complete partition:
✕
Row /@ CompletePartitions@5 |
Here are the k-step partitions of 5, for k = 1, 2, 3, 4:
✕
Row /@ KStepPartitions[5, 1] |
✕
Row /@ KStepPartitions[5, 2] |
✕
Row /@ KStepPartitions[5, 3] |
✕
Row /@ KStepPartitions[5, 4] |
This is the same as the partitions of 5 with no restrictions:
✕
Row /@ KStepPartitions[5, 5] |
Define KStep(n, k) to be the number of k-step partitions of n:
✕
pre\[Gamma]@10 /. 0 -> "\[CenterDot]" // MatrixForm |
The second column is the number of complete partitions of n = 0, 1, 2, ….
Define the matrix γ by γ(i, j) = KStep(i − j, j − 1) for i ≥ j ≥ 1. In words, the columns of γ are the numbers of k-step partitions shifted down to form a lower-triangular matrix.
Here is the matrix γ:
✕
\[Gamma]@10 /. 0 -> "\[CenterDot]" // MatrixForm |
It matches the inverse of ν:
✕
Inverse@\[Nu]@10 /. 0 -> "\[CenterDot]" // MatrixForm |
We can now put everything together. The inverse of the matrix ν matches the matrix γ, which is the main theorem for this blog.
Theorem. For each n, ν(n) γ(n) = I(n), the identity matrix.
This presents the situation when n = 6:
✕
Row[{\[Nu]@6 /. 0 -> "\[CenterDot]" // MatrixForm, Style[" \[CenterDot] ", 16], \[Gamma]@6 /. 0 -> "\[CenterDot]" // MatrixForm, Style[" = ", 16], IdentityMatrix@6 /. 0 -> "\[CenterDot]" // MatrixForm}] |
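As an independent cross-check, here is a Python sketch (my own illustrative translation, not the notebook’s code) that rebuilds ν and γ from the definitions above and multiplies them:

```python
def partitions(n, max_part=None):
    """Weakly decreasing partitions of n."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def nu(size):
    """nu[i][j]: #odd-length minus #even-length distinct partitions
    of i+1 with maximum part j+1 (0-based indices)."""
    m = [[0] * size for _ in range(size)]
    for i in range(1, size + 1):
        for p in partitions(i):
            if len(set(p)) == len(p):            # distinct parts only
                m[i - 1][p[0] - 1] += 1 if len(p) % 2 else -1
    return m

def k_step(p, k):
    """Translation of the KStepPartitionQ condition (p weakly decreasing)."""
    if not p or p[-1] > k:
        return False
    for j in range(len(p) - 1):
        last = max(idx for idx, v in enumerate(p) if v == p[j])
        if p[j] - k > sum(p[last + 1:]):
            return False
    return True

def kstep_count(n, k):
    return 1 if n == 0 else sum(1 for p in partitions(n) if k_step(p, k))

def gamma(size):
    """gamma[i][j] = KStep(i - j, j - 1) for 1-based i >= j, else 0."""
    return [[kstep_count(i - j, j - 1) if j <= i else 0
             for j in range(1, size + 1)] for i in range(1, size + 1)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]
```

For sizes up to 8 the product is exactly the identity matrix, and nu(10)[9][4] reproduces the value ν(10, 5) = 2 computed above.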
Hanna conjectured that
1 = Sum_{n ≥ 0} C(n) x^n (1 − x)(1 − x^2)…(1 − x^(n+1)), (1)
where C(n) is the sequence that counts the number of complete partitions of n.
Proof: Rewrite the desired identity as
1/((1 − x)(1 − x^2)(1 − x^3)…) = Sum_{n ≥ 0} C(n) x^n / ((1 − x^(n+2))(1 − x^(n+3))…) (2)
or
1/((1 − x)(1 − x^2)(1 − x^3)…) = Sum_λ x^m / ((1 − x^(m+2))(1 − x^(m+3))…), (3)
where the last sum is over all complete partitions λ, and m is the number that λ partitions.
We claim every partition contains a maximal complete subpartition. For example, {7, 3, 1, 1} has the maximal complete subpartition {3, 1, 1}, a partition of 5. If the maximal complete subpartition λ of a partition P partitions m, then m + 1 cannot be a part of the original partition P. If it were, we could insert it into λ, contradicting its maximality.
Furthermore, there is no constraint on the parts in P larger than m + 1, because the fact that m + 1 is missing means that no larger complete subpartition can be produced. Hence x^m / ((1 − x^(m+2))(1 − x^(m+3))…) generates all partitions whose maximal complete subpartition is a given complete partition λ of m.
Summing over all complete partitions λ gives (3), and consequently (1).
Identifying coefficients of like powers of x proves that C(n) appears as the second column of the inverse of ν. The straightforward bookkeeping generalization then proves the main theorem for the other columns.
Here is a proof of the main theorem by example. Consider the dot product of row 10 of ν with the second column of γ:
✕
Row[{MatrixForm@{Last[\[Nu]@10]}, " \[CenterDot] ", MatrixForm@\[Gamma][10][[All, 2]]}] |
An entry of ν is the difference between the number of distinct partitions of odd length and of even length. Here are these partitions:
✕
Table[Row /@ DistinctPartitionsByMax[10, m], {m, 10}] |
Here are the complete partitions counted in the second column of γ:
✕
MatrixForm@Join[{{}, {}}, Table[Row /@ CompletePartitions[i], {i, 6}]] |
Count them up and recall that the sequence for the number of complete partitions starts like this:
✕
Complete /@ Range[6] |
Consider the fifth term in the dot product: 2×2. It comes from all possible pairs of a distinct partition of 10 with maximum part 5 and a complete partition of 3.
That is:
({5, 4, 1}, {1, 1, 1}),
({5, 4, 1}, {2, 1}),
({5, 3, 2}, {1, 1, 1}),
({5, 3, 2}, {2, 1}).
The signs of those pairs are all negative, because the four distinct partitions all have an odd number of parts. Using β, we will find four other terms in the dot product that have the opposite sign to get cancellation.
Let D be the set of distinct partitions and C be the set of complete partitions.
Define the function β as follows.
Let and .
In other words:
In the previous example, we had four pairs. Here is how changes them:
,
,
,
.
The resulting pairs are still pairs of a distinct partition and a complete partition. However, the odd length becomes an even length, giving the cancellation. Also, β reverses itself, so we get complete cancellation.
Formally, the function β changes the parity of the length of the distinct partition and is an involution on the set of pairs. Therefore, the dot product is zero.
A composition of n is a finite sequence of non-negative integers with sum n. So unlike an integer partition, order matters. For example, the two compositions (3, 1) and (1, 3) are different.
Allowing 0 as a part only makes sense if the number of parts is specified.
A strict composition of n is a finite sequence of positive integers with sum n. Here are the strict compositions of 4:
✕
Row /@ (C4 = StrictCompositions@4) |
Let ℓ(c) be the number of parts of the composition c. Here are the lengths of the compositions just shown:
✕
Length /@ C4 |
As ν is for partitions, so σ is for strict compositions. Let’s define the matrix σ by σ(n, m) = Σ −(−1)^ℓ(c), where the sum is over all strict compositions c of n with maximum part m, and ℓ(c) is the number of parts of c.
For example, for n = 4 and m = 2, these are the strict compositions:
✕
Row /@ Select[C4, Max@# == 2 &] |
Three have odd length and one has even length, so σ(4, 2) = 3 − 1 = 2.
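That entry is easy to cross-check. An illustrative Python sketch (the helper names are my own):

```python
from itertools import permutations

def strict_compositions(n):
    """All ordered sequences of positive integers summing to n."""
    def parts(n, max_part):
        if n == 0:
            yield []
            return
        for first in range(min(n, max_part), 0, -1):
            for rest in parts(n - first, first):
                yield [first] + rest
    seen = set()
    for p in parts(n, n):           # permute each partition to get compositions
        seen.update(permutations(p))
    return sorted(seen)

def sigma_entry(n, m):
    """sigma(n, m): sum of -(-1)^length over strict compositions of n
    with maximum part m."""
    return sum(-(-1) ** len(c) for c in strict_compositions(n) if max(c) == m)
```

sigma_entry(4, 2) returns 2, matching the count above.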
Just like the matrices before, we truncate σ to a lower-triangular r × r matrix:
✕
\[Sigma]@10 /. 0 -> "\[CenterDot]" // Grid |
Take the inverse of σ. What are these numbers?
✕
Inverse[\[Sigma]@10] /. 0 -> "\[CenterDot]" // Grid |
Looking up the second column in the OEIS leads to A002321, which is enough to lead to a conjecture. To formulate it, we define two lower-triangular matrices α and χ.
Let α be the lower-triangular matrix of all 1s:
✕
\[Alpha]@10 /. 0 -> "\[CenterDot]" // Grid |
Define the lower-triangular matrix χ by χ(n, k) = μ(n/k) if k divides n, and χ(n, k) = 0 otherwise:
✕
\[Chi]@10 /. 0 -> "\[CenterDot]" // Grid |
Let’s calculate the matrix product:
✕
\[Alpha][10].\[Chi][10].\[Alpha][10] /. 0 -> "\[CenterDot]" // Grid |
That matrix product matches the inverse of :
✕
Inverse[\[Sigma][10]] /. 0 -> "\[CenterDot]" // Grid |
This led me to conjecture that the inverse of σ equals α · χ · α.
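The conjecture can be probed numerically at small sizes. Here is an illustrative Python sketch (exact integer arithmetic, size 6; numerical evidence only, not a proof):

```python
from itertools import permutations

def strict_compositions(n):
    """All ordered sequences of positive integers summing to n."""
    def parts(n, max_part):
        if n == 0:
            yield []
            return
        for first in range(min(n, max_part), 0, -1):
            for rest in parts(n - first, first):
                yield [first] + rest
    out = set()
    for p in parts(n, n):
        out.update(permutations(p))
    return out

def mobius(n):
    """Moebius mu via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

size = 6
# sigma(n, m): sum of -(-1)^length over strict compositions of n, max part m
sigma = [[sum(-(-1) ** len(c) for c in strict_compositions(n) if max(c) == m)
          for m in range(1, size + 1)] for n in range(1, size + 1)]
alpha = [[1 if k <= n else 0 for k in range(1, size + 1)]
         for n in range(1, size + 1)]
chi = [[mobius(n // k) if n % k == 0 and k <= n else 0
        for k in range(1, size + 1)] for n in range(1, size + 1)]
product = matmul(sigma, matmul(alpha, matmul(chi, alpha)))
```

At size 6 the product σ · (α · χ · α) is exactly the identity matrix.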
Who will prove it?
The relevant OEIS triangles are A134542, A134541, A000012 and A054525.
It is remarkable that the two kinds of partitions should be connected so simply by matrix inversion. That strict compositions are related to the multiplicative Möbius function again via matrix inversion amazes me. Are there more such pairs in the universe of additive number theory?
Andrews, G., G. Beck and B. Hopkins. “On a Conjecture of Hanna Connecting Distinct Part and
Complete Partitions.” Annals of Combinatorics 24 (2020): 217–24.
Brown, J. L. “Note on Complete Sequences of Integers.” The American Mathematical Monthly 68,
no. 6 (1961): 557–60.
Hoggatt, V. E. and C. H. King. “Problem E-1424.” The American Mathematical Monthly 67 (1960): 593.
MacMahon, P. A. Combinatory Analysis, vol. 1. Cambridge: Cambridge University Press, 1915.
OEIS Foundation Inc. (2019), The On-Line Encyclopedia of Integer Sequences, oeis.org.
Park, S. K. “Complete Partitions.” Fibonacci Quarterly 36, no. 4 (1998): 354–60.
Park, S. K. “The r-Complete Partitions.” Discrete Mathematics 183 (1998): 293–97.
Schneider, R. “Arithmetic of Partitions and the q-Bracket Operator.” Proceedings of the American Mathematical Society 145 (2017): 1953–68.
This is particularly true in the field of particle physics, where a new geometrical object has been found to be connected to particle dynamics: the amplituhedron. It represents a novelty not only in physics but also in mathematics, generalizing the concept of a convex polygon. In this blog post, I will first discuss its relation to particle physics, and then how to visualize its geometry using the Wolfram Language.
I’m a PhD student in theoretical physics at Durham University, United Kingdom. I was born in Venice, Italy, and I did my bachelor’s and master’s degrees in physics in the city of Trieste. After earning my degree, I was lucky enough to get into a wonderful PhD program (ITN) called SAGEX, which is funded by the European Union. We are a group of 15 early-stage researchers and as many supervisors distributed among 8 different academic institutions around Europe. The purposes of the project are to investigate the geometric structure hidden in the laws of particle dynamics and spread the word about all the amazing new discoveries in the field, like the amplituhedron.
As part of my SAGEX training, I’m spending three months as a visiting scholar at Wolfram Research in Champaign, Illinois. During my time here, I’ve started discovering all the features that the Wolfram Language offers to deal with many different geometries and represent them graphically. I spent a year or so working on the amplituhedron, and had a lot of fun creating a series of sketched drawings while trying to get some intuition about this funny object. Now that my experience at Wolfram is coming to an end, I think the time has come to transform those wrinkled leaves into some incredible, colorful pictures using the Wolfram Language.
Our knowledge of elementary particles and their nature is almost entirely based on scattering experiments. Every day, in many laboratories around the world, particles such as electrons and protons are accelerated to extremely high velocity and crashed into one another. After the collision, the kinetic energy of crashing particles is converted into new particles. These new particles will then scatter in all directions, eventually hitting a detector where their velocity, charge and mass are recorded.
So, here’s the big question theoretical physicists are trying to answer: If I shoot two protons into one another, which type of particles might appear? In which direction and with what velocity? Physics is about making predictions. Trying to answer these types of questions corresponds to investigating the fundamental nature of particle interaction.
It just so happens that being able to give exact answers to this question is almost impossible. However, we can formulate approximate solutions with astonishing precision. The theory behind this representation of particle dynamics is called perturbative quantum field theory (QFT), and it was mainly the result of the work of S. Tomonaga, J. Schwinger and R. Feynman, who together won the Nobel Prize in Physics in 1965. Many mathematical steps are needed to carry out these calculations and then compare them with the data coming from the collision experiments.
At the core of this sophisticated procedure, there is a graph theory/combinatorial problem that is usually indicated as “the sum over all possible Feynman diagrams.” The result of this calculation is called a scattering amplitude, or sometimes just amplitude. Most of the time this step represents a bottleneck for the whole calculation. This is because its complexity grows factorially with the number of particles involved in the scattering and the precision we want to obtain. Increasing these two parameters quickly makes the computation impossible even for supercomputers.
In the last 20 years, an enormous amount of progress has been made on the study of amplitudes. It’s a story in which the main character is the gluon, the mediator of the nuclear force. Here, I would like to highlight two remarkable discoveries, both from 2009, that are directly responsible for the discovery of the amplituhedron. The first is that a particular class of gluon amplitude can be thought of as a volume of a polytope. The second one is that all amplitudes in a particular theory, called planar N = 4 SYM, are strictly connected to the Grassmannian, which is a space of hyperplanes. In December 2013, N. Arkani-Hamed and J. Trnka published “The Amplituhedron” and were able to make a connection between these two amazing facts, opening new perspectives and puzzles. You can read more about this in J. Bourjaily and H. Thomas’s article, “What Is the Amplituhedron?”.
The amplituhedron is a very general geometric object that can appear in many forms. There are three important parameters that define the amplituhedron, usually denoted m, n and k. The parameter m is the dimension in which the amplituhedron lives. The parameter n is the number of points that we use to build it; this parameter physically corresponds to the number of particles participating in the scattering process. The last one, k, is more subtle both geometrically and physically. For k = 1, the amplituhedron is a four-dimensional polytope, the generalization of a two-dimensional polygon or three-dimensional polyhedron to higher dimensions.
A polygon can be thought of as the set of points trapped inside a curve given by many segments. As I will try to show you, the amplituhedron for k > 1 generalizes this idea to hyperplanes. For example, for k = 2, the amplituhedron will be given by a set of lines trapped inside an edge curve. Physically, the parameter k roughly indicates the type of particles scattering.
The main goal of this blog post is to focus on the case k = 2 and use the Wolfram Language to create a visual representation of this weird set of lines, and also to show in which sense it represents the natural generalization of a polygon.
The amplituhedron has a stunning compact definition that can be given in one line, but to understand it we need to be able to read the language of projective geometry.
Projective geometry is an incredibly powerful tool for mapping complex geometric problems to elementary linear algebra problems. The basic idea is that I can represent points in a plane as lines passing through the origin in three dimensions:
As you can see in the image, this correspondence can be built by fixing a plane and considering the intersection between the plane and the lines through the origin. In general, two points are needed to identify a line but, since each line passes through the origin, one point is enough to determine it. In this image, the points are labeled by the Pi. You will notice that, if I multiply the coordinates of one of the Pi by a constant, the effect will be to move it along the line it represents. So, for example, both P and 2P will be equivalent because they represent the same line. This is in fact the formal way in which the projective space is defined. The two-dimensional projective space ℙ²—that is, the one represented in the picture—is defined as the set of points in three dimensions identified by the equivalence relation P ∼ λP, with λ ≠ 0.
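This equivalence is easy to check numerically. The following sketch (in Python with NumPy rather than the Wolfram Language, purely for illustration) rescales a homogeneous 3-vector onto the plane z = 1 and confirms that P and 2P land on the same projective point:

```python
import numpy as np

def project_to_plane(p):
    """Map a homogeneous 3-vector to its representative on the plane z = 1."""
    return p / p[2]

P = np.array([2.0, 4.0, 2.0])

# P and any nonzero multiple of P lie on the same line through the origin,
# so they project to the same point of the projective plane:
print(project_to_plane(P))      # [1. 2. 1.]
print(project_to_plane(2 * P))  # [1. 2. 1.]
```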
OK, now that we have this fancy way to think about points, what can we do with it? First of all, notice that depending on which plane we project onto, the distances between points can change. The idea is that we would like to work just with lines in three dimensions, independent of the plane we are projecting onto.
What are the kinds of questions we can ask that do not depend on the specific projective plane we choose? There is a question that is particularly easy to answer in this setup: are three points aligned? In fact, if three points are aligned, the three vectors representing them will all be contained in a plane. This means that they will be linearly dependent, and therefore the determinant det(P1, P2, P3) will be 0.
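In other words, collinearity in the projective plane is just linear dependence in three dimensions. A quick numerical check (sketched in Python/NumPy for illustration):

```python
import numpy as np

P1 = np.array([1.0, 0.0, 1.0])
P2 = np.array([0.0, 1.0, 1.0])
P3 = (P1 + P2) / 2  # the midpoint, so P1, P2 and P3 are aligned

# aligned points give three coplanar vectors, hence a vanishing determinant:
det = np.linalg.det(np.array([P1, P2, P3]))
print(det)  # 0 up to floating-point error
```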
What if the determinant is different from 0? Since the Pi are projective, we can't say much. In fact, sending a point P to −P changes the sign of the determinant from positive to negative. One can decide, therefore, to be more restrictive and consider half-lines instead of lines. This amounts to restricting the equivalence relation to λ > 0. In this way, the sign of the determinant becomes invariant under the rescaling and becomes something we can meaningfully talk about.
There is a simple trick in three dimensions to understand if a determinant is positive or negative: the right-hand rule. Using this rule, you can easily see that one ordering of three points gives a negative determinant while the opposite ordering gives a positive one. You can see this for yourself by playing with it, saying that the point P3 is on the right side of the line (P1 P2), which is the same as saying that det(P1, P2, P3) > 0. Beware, though! The notion of right and left changes if you are watching the projective plane from below or from above, in the same way that your right hand corresponds to the left hand of your mirror image.
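The sign behavior described above can be verified directly: swapping two of the points, or flipping the sign of one of them, flips the sign of the determinant. A small Python/NumPy sketch with an illustrative choice of points:

```python
import numpy as np

P1 = np.array([0.0, 0.0, 1.0])
P2 = np.array([1.0, 0.0, 1.0])
P3 = np.array([0.0, 1.0, 1.0])

d = np.linalg.det(np.array([P1, P2, P3]))        # positive for this ordering
d_swap = np.linalg.det(np.array([P1, P3, P2]))   # swapping two rows flips the sign
d_flip = np.linalg.det(np.array([P1, P2, -P3]))  # so does sending P3 -> -P3

print(d, d_swap, d_flip)  # approximately 1.0, -1.0, -1.0
```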
We will start by exploring the first way in which the amplituhedron can appear: the convex polygon. It's in this projective formulation that A. Hodges in 2009 recognized that a particular gluon scattering amplitude was indeed the volume of a polytope in ℙ⁴. Unfortunately, we cannot see in four dimensions, so we will stick to ℙ² without actually losing much of the general picture. Let's start from the simplest polygon, the triangle.
Suppose I give you three vertices P1, P2, P3, and I ask you to parametrize the space of all points inside the triangle. It seems like a simple yet very annoying task, doesn't it? But there is an incredibly efficient way to do it!
Using physics to get some intuition, we can consider that each of these objects has a mass, and we want to calculate their common center of mass. The center of mass will correspond to the weighted average:
A = (m1 P1 + m2 P2 + m3 P3)/(m1 + m2 + m3);
… where the mass parameters mi are clearly positive. The idea is that the center of mass will always lie somewhere among the three points, depending on the values of the individual masses. We could be satisfied with this result, but we can actually do better. Let's look at our triangle projectively. This time we will try to perform some visualization, so let's choose some coordinates for the points P1, P2, P3:
P = 2 Table[RandomReal[], 3, 2];
You can confirm that the type of logic we have used in two dimensions is also valid for points in three dimensions. We can construct these three-dimensional points out of the two-dimensional ones just by adding a new unit coordinate to all our points. We will call the new three-dimensional points newP.
Consider now the point A = m1 P1 + m2 P2 + m3 P3, which can be written in the Wolfram Language as:

M = {m1, m2, m3};
A = M.newP
The point A is not exactly on the triangle defined by the points P1, P2, P3, because of the overall factor m1 + m2 + m3. But here comes the change of perspective. Let's think of all our points as half-lines (or oriented lines) passing through the origin:
If we rescale our points by a factor λ > 0, they will always represent the same line. This means that we can rescale A so that it lies on the triangle—or even better, just invoke the equivalence relation A ∼ λA.
We can summarize by saying that the inside of a triangle with vertices P1, P2, P3 ∈ ℙ² can be parametrized with the map A = m1 P1 + m2 P2 + m3 P3, with all mi > 0.
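We can also check numerically that this projective construction reproduces the usual weighted average. The sketch below (Python/NumPy, for illustration) lifts random vertices by appending a unit coordinate, mirroring the newP construction, and rescales the result back onto the plane:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((3, 2))                   # three triangle vertices in the plane
newP = np.hstack([P, np.ones((3, 1))])   # lift each vertex by appending a unit coordinate
m = rng.random(3) + 0.1                  # three positive masses

A = m @ newP   # A = m1 P1 + m2 P2 + m3 P3 in homogeneous coordinates
A = A / A[2]   # rescale the half-line back onto the plane z = 1

# the first two coordinates of A are exactly the center of mass:
com = (m @ P) / m.sum()
print(np.allclose(A[:2], com))  # True
```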
The beauty of this projective construction of the inside of a triangle is that it generalizes in a straightforward way to polygons. Suppose this time instead of three points, we have five points P1, …, P5 ∈ ℙ². Following the center of mass argument and knowing that A can be rescaled, we can immediately say that the inside of the five points will be given by:
A = m1 P1 + m2 P2 + m3 P3 + m4 P4 + m5 P5
Great, but this time there is a new element coming into play. Points can be arranged in different ways!
You can see that five points define a pentagon only if they are distributed in a specific way. So how can I be sure that the points I have chosen form a convex polygon? You can see that in the case of a convex polygon, if I take a line along one of the edges, like the line (P1 P2), all the other points will be on the same side of the line. This is not true for the corresponding edge line in the first image. But we know how to express this concept mathematically—the concept of "being on the right side of a line"—using the determinant! So, if we have n ordered points and we want to use them to represent a convex polygon like the one in the image, we must impose that det(Pi, Pi+1, Pj) > 0 for all i and j. If one considers the n×3 matrix P whose rows are the points, where n is the number of points and 3 is the dimension of the embedding space, the convexity condition is equivalent to saying that all the ordered minors of the matrix P must be positive:
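To see the convexity criterion in action, take points in counterclockwise order on a circle (a convex configuration by construction), lift them, and check that every ordered 3×3 minor is positive. A Python/NumPy sketch:

```python
import numpy as np
from itertools import combinations

n = 5
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
# n points in counterclockwise (hence convex) order, lifted to the plane z = 1:
P = np.stack([np.cos(angles), np.sin(angles), np.ones(n)], axis=1)

# ordered minors: determinants det(Pi, Pj, Pk) for all i < j < k
minors = [np.linalg.det(P[[i, j, k]]) for i, j, k in combinations(range(n), 3)]
print(all(d > 0 for d in minors))  # True
```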
In this description of the inside of a polygon, you can see that there is a pairing of two positive spaces: the positivity of the mass-parameters vector m, where all mi > 0; and the convexity of the space of points, where all the ordered minors Minors[P, D] are positive and D stands for the dimension of the embedding space where our points live. In the case of polygons, D = 3. In the next section, we will finally see the true novelty introduced by the amplituhedron in geometry: the Grassmannian polytope.
I previously pointed out that a polygon can be described as the set of points that are on the right side of a bunch of lines. What if instead of points, we would like to describe a space made of lines? Can we come up with some similar concept? The amplituhedron is exactly the answer to this question, and it generalizes this idea not only to lines but also to any hyperplane in any dimension. We will focus now on the space of lines in ℙ³, known as the Grassmannian Gr(2, 4), and here we will build the amplituhedron.
First, how can we mathematically describe a line in projective space? Well, one line is identified by two points. So instead of having just the point A, I can think of having two points, A and B. If I just rewrite the definition of a polytope and promote the parameters vector m to a matrix M, I get:
{A, B} = M.P
Now I have to find the inequalities defining the amplituhedron. If I see the parameter vector m as a 1×n matrix, I can rephrase the polygon positivity constraint by saying that all the ordered minors of m are positive. You can see now that this definition naturally generalizes to matrices! Using this definition, the amplituhedron will be given by the pairs of points parametrized by the formula {A, B} = M.P, where all the ordered minors of both M and P are positive.
What does this space of lines look like? There are various ways to represent this space, as you will see. First of all, we have to fix some coordinates. It's true that two points identify a line, but this is a redundant description. In fact, I can slide the points along the line, and they will keep describing the same line. In the embedding space, the points A and B are two vectors that span a plane. In other words, if L is a 2×2 matrix with det(L) > 0, then {A, B} and L.{A, B} will identify the same line in the projective space. The determinant is fixed to be positive to preserve the orientation of the line:
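This redundancy can also be checked numerically: acting on {A, B} with a matrix L of positive determinant rescales all six 2×2 minors (the Plücker coordinates of the line) by det(L), leaving the line itself unchanged. A Python/NumPy sketch with illustrative coordinates:

```python
import numpy as np
from itertools import combinations

AB = np.array([[1.0, 0.0, 2.0, 1.0],   # the point A
               [0.0, 1.0, 1.0, 3.0]])  # the point B

def pluecker(M):
    """The six 2x2 minors of a 2x4 matrix: the Pluecker coordinates of its line."""
    return np.array([np.linalg.det(M[:, [i, j]]) for i, j in combinations(range(4), 2)])

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # det(L) = 6 > 0: same line, same orientation

# every minor is rescaled by det(L) = 6, so the line is unchanged projectively:
print(pluecker(L @ AB) / pluecker(AB))  # all entries approximately 6
```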
Let's look now at the specific case in which I have four points defining our amplituhedron in Gr(2, 4), meaning that the embedding space is four dimensional. First, I need the minors of P to be positive. I can easily achieve this by choosing P to be equal to the identity matrix. Then I have to take care of the positivity of the matrix M. I can use the equivalence relation to reduce the number of parameters to four, and write the matrix as:
M = {{1, a, 0, -b}, {0, c, 1, d}}
I choose this strange parametrization because its minors are particularly simple. In fact, if I calculate the minors and impose the positivity constraint, I get:
ineq = # > 0 & /@ Flatten[Minors[M, 2]] |
Then, using Reduce to solve the inequalities, I obtain:
Reduce[ineq]
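The same computation can be reproduced symbolically outside the Wolfram Language. Here is a sketch using Python with SymPy (assuming SymPy is available) that lists the six ordered minors of M and makes the four resulting inequalities explicit:

```python
import sympy as sp
from itertools import combinations

a, b, c, d = sp.symbols("a b c d", positive=True)
M = sp.Matrix([[1, a, 0, -b],
               [0, c, 1, d]])

# the six ordered 2x2 minors of M, taken over column pairs i < j:
minors = [M[:, [i, j]].det() for i, j in combinations(range(4), 2)]
print(minors)  # [c, 1, d, a, a*d + b*c, b]

# with a, b, c, d > 0 every minor is manifestly positive,
# so positivity of the minors reduces to a > 0, b > 0, c > 0, d > 0
print(all(m.is_positive for m in minors))  # True
```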
It's a set of four inequalities, as simple as it can be. To visualize this space of lines, first I have to project everything down to three dimensions. Any choice of the projective plane is valid; here, I choose a plane orthogonal to a fixed vector. I then sample the parameter space, generating two thousand lines. I also add to the same plot the tetrahedron formed by the points P1, P2, P3, P4, and we get the result:
You can see that the amplituhedron in this case corresponds to lines passing inside a tetrahedron—or a bundle of wheat, if you like. You will also notice that two faces of the tetrahedron are crossed by the lines, while the other two are not. This image is suggestive but can also be misleading. In fact, there are lines that intersect those same two faces of the tetrahedron but that do not belong to the amplituhedron. To get the correct intuition, we have to look at the allowed regions for the points A and B. You can see that B is given by the combination B = c P2 + P3 + d P4, which is exactly the definition we gave for the inside of a triangle. Therefore, B belongs to the face (P2 P3 P4) of the tetrahedron, which corresponds to the red triangle in the following image. The point A instead is defined as A = P1 + a P2 − b P4, which looks like a triangle but with a negative parameter. Again using the analogy with the center of mass, you can think of the −b P4 term as a negative mass. Instead of dragging the center of mass toward its position, the negative term repels it toward infinity. I have represented its domain with the blue region at the bottom of the tetrahedron. One can prove that a line crossing the blue and red triangles will automatically also cross the green and yellow triangles. In fact, for different parametrizations of M, we would have had A belonging to the green triangle and B to the yellow one.
Based on this image, we can say that the amplituhedron is the set of oriented lines intersecting the blue, green and yellow regions in this order, or any other cyclic variant of it. For example, another allowed sequence would be green, red, yellow, blue.
Last but not least, there's still another very effective way to represent the amplituhedron. When I imagine a polygon, I usually think of its edges, and for a polyhedron I think of its faces. With respect to the description given previously, which focused on the interior, this image corresponds to a boundary representation. The boundaries of the space of lines defining the amplituhedron correspond to the boundary of the parameter space—that is, to when one of the inequalities gets saturated. For example, you can see that if I take the limit b → 0, the point A reduces to P1 + a P2, which lies on the segment (P1 P2). This means that the segment (P1 P2) is a boundary that the line cannot cross. By representing all boundary components, I can obtain an elegant and essential visualization. When I plot the boundaries with their orientations and one element of the amplituhedron, this is the result:
This image represents the amplituhedron as the set of all oriented lines trapped inside a red polygonal spiral. One thing that can be deceptive in this representation is that it seems that the gray line can be parallel to one of the boundary segments. This is actually false. What we mean by "parallel" is that both lines have the same direction. But this implies that they belong to the same plane, and therefore the corresponding determinant will be equal to 0.
Finally, I would like to show you what an n-point amplituhedron looks like in three dimensions. This time, I have to be sure that all the ordered minors of the matrix of points P are positive. In order to do this, I choose to distribute the points along a spiral. Distances in this business have no meaning, so I can choose to distribute the points in any way I like as long as they satisfy the convexity requirement. I tried to distribute them in the nicest way, obtaining the polyhedron you see here:
As a final step, I can highlight the boundary of the amplituhedron. The n-point amplituhedron's boundary is given by the union of the segments (Pi Pi+1). There is one boundary term that is special: the segment joining Pn and P1. This piece, as we have seen for the four-point case, doesn't go along the edge of the polyhedron but along the complementary segment, through infinity. The result:
I have smoothed down the polygonal spiral to give the illusion of an arbitrary number of points. In an analogy to the tetrahedron, we can say that the -point amplituhedron in this example is given by all the oriented lines trapped inside the spiral.
In this blog post, I focused on one particular type of amplituhedron that lives in Gr(2, 4), the space of lines in three projective dimensions. I have chosen this example because it's the only amplituhedron with a physical interpretation that lives in fewer than four dimensions. One thing I really like about these spaces of lines or planes is that they are constructed using very primitive ideas. They force you to think about basic questions such as: What is a triangle? How do I usually picture it in my mind? I think these Grassmannian geometries really deserve a graphical representation capable of exposing their beauty and simplicity. It has been fun trying to achieve this for the amplituhedron, and I hope that you had fun too!
For those of you who were determined enough to get to the bottom of this post, there are a lot of other interesting ideas related to the amplituhedron that I encourage you to check out. You can see this blog post as training for an amazing marathon of lectures (you will see why it's called a marathon) held by Nima Arkani-Hamed in June 2018 at the Center for Quantum Mathematics and Physics (QMAP) at UC Davis. In particular, the second lecture is a practical introduction to projective geometry, while the third one introduces the concept of the canonical form—the most notably absent topic in this post. The canonical form is in fact the map that connects the amplituhedron with actual amplitudes. Like the amplituhedron itself, it represents a novelty in mathematics and is a fascinating topic in its own right. Enjoy the run!
This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement no. 764850.
Get full access to the latest Wolfram Language functionality with a Mathematica 12 or Wolfram|One trial.