Wolfram Blog: News, views, and ideas from the front lines at Wolfram Research.

Double Eclipse! Or Why Carbondale, Illinois, Is Special
By Jeffrey Bryant

Eclipse paths crossing

The upcoming August 21, 2017, total solar eclipse is a fascinating event in its own right. It’s also interesting to note that on April 8, 2024, there will be another total solar eclipse whose path will cut nearly perpendicular to the one this year.

GeoGraphics[{Red,
  SolarEclipse[DateObject[{2017, 8, 21}], "GraphicsData", EclipseType -> "Total"],
  SolarEclipse[DateObject[{2024, 4, 8}], "GraphicsData", EclipseType -> "Total"]},
 GeoCenter -> Entity["City", {"Carbondale", "Illinois", "UnitedStates"}],
 GeoRange -> Quantity[500, "Miles"]]

With a bit of styling work and zooming in, you can see that the city of Carbondale, Illinois, is very close to this crossing point. If you live there, you will be able to see a total solar eclipse twice in only seven years.

GeoGraphics[{
  {GeoStyling["StreetMap"],
   Polygon[Entity["AdministrativeDivision", {"Illinois", "UnitedStates"}]]},
  {Red, SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhaseCenterLine", EclipseType -> "Total"],
   GeoStyling[GrayLevel[0, .2]],
   SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon", EclipseType -> "Total"]},
  {Red, SolarEclipse[DateObject[{2024, 4, 8}], "TotalPhaseCenterLine", EclipseType -> "Total"],
   GeoStyling[GrayLevel[0, .2]],
   SolarEclipse[DateObject[{2024, 4, 8}], "TotalPhasePolygon", EclipseType -> "Total"]}},
 GeoCenter -> Entity["City", {"Carbondale", "Illinois", "UnitedStates"}],
 GeoRange -> Quantity[200, "Miles"], GeoBackground -> "Satellite"]

Let’s aim for additional precision in our results and compute where the intersection of these two lines is. First, we request the two paths and remove the GeoPosition heads.

line1 = DeleteCases[
   SolarEclipse[DateObject[{2017, 1, 1, 0, 0}], "TotalPhaseCenterLine", EclipseType -> "Total"],
   GeoPosition, {0, \[Infinity]}, Heads -> True];
line2 = DeleteCases[
   SolarEclipse[DateObject[{2024, 1, 1, 0, 0}], "TotalPhaseCenterLine", EclipseType -> "Total"],
   GeoPosition, {0, \[Infinity]}, Heads -> True];

Then we can use RegionIntersection to find the intersection of the paths and convert the result to a GeoPosition.

intersection = GeoPosition[RegionIntersection[line1, line2][[1, 1]]]

By zooming in even further and getting rid of some of the style elements that are not useful at high zoom levels, we can try to pin down the crossing point of the two eclipse center paths.

GeoGraphics[{Red,    SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhaseCenterLine",     EclipseType -> "Total"],    SolarEclipse[DateObject[{2024, 4, 8}], "TotalPhaseCenterLine",     EclipseType -> "Total"], PointSize[.01], Point[intersection]},   GeoCenter -> intersection, GeoRange -> Quantity[1, "Miles"]]

It appears that the optimal place to see the longest totality during both eclipses is just southwest of Carbondale, Illinois, near the east side of Cedar Lake, along South Poplar Curve Road where it meets Salem Road—although anywhere in the above image will experience totality during both events.

GeoImage[intersection, GeoRange -> Quantity[.2, "Miles"]]
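
As a quick sanity check, GeoDistance gives the separation between this crossing point and Carbondale itself:

GeoDistance[intersection, Entity["City", {"Carbondale", "Illinois", "UnitedStates"}]]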

Because of this crossing point, I expect a lot of people will be planning to observe both the 2017 and 2024 solar eclipses. Don’t wait until the last minute to plan your trip!

So how often do eclipse paths intersect? We can find out with a little bit of data exploration. First, we need to get the dates, paths and types of all eclipses within a range of dates. Let's limit ourselves to the span from 40 years in the past to 40 years in the future, a total of about 80 years.

dates = SolarEclipse[{DateObject[{1977, 1, 1, 0, 0}],      DateObject[{2057, 1, 1, 0, 0}], All}, "MaximumEclipseDate",     EclipseType -> "Total"];

paths = SolarEclipse[{DateObject[{1977, 1, 1, 0, 0}],      DateObject[{2057, 1, 1, 0, 0}], All}, "TotalPhaseCenterLine",     EclipseType -> "Total"];

types = SolarEclipse[{DateObject[{1977, 1, 1, 0, 0}],      DateObject[{2057, 1, 1, 0, 0}], All}, "Type",     EclipseType -> "Total"];

Keep only the eclipses for which path data is available.

totaleclipsedata =    Select[Transpose[{dates, paths,       types}], (! MatchQ[#[[2]], _Missing]) &];

Generate a set of all pairs of eclipses.

pairs = Subsets[totaleclipsedata, {2}];

Now we define a function to test if a given pair of eclipse paths intersect.

intersectionOfLines[triple1_, triple2_] :=
  GeoPosition[
   RegionIntersection[triple1[[2]] /. GeoPosition[x_] :> x,
     triple2[[2]] /. GeoPosition[x_] :> x][[1, 1]]]

Finally, we apply that function to all pairs of eclipses and keep only those for which an intersection occurs.

intersections =
  Select[Quiet[Transpose[{intersectionOfLines @@@ pairs, pairs}], {Part::partd}],
   FreeQ[#, EmptyRegion] &];

It turns out that there are a little over one hundred eclipse path intersections during this timespan.

intersections // Length

We can visualize the distribution of these intersection points and see that many occur over the oceans. The chances of an intersection being within driving distance of a given land location are far lower.

GeoGraphics[{Red,    GeoMarker[GeoPosition[intersections[[All, 1]][[All, 1]]]]}]

So take advantage of the 2017 and 2024 eclipses if you live near Carbondale, Illinois! It’s unlikely you will see such an occurrence anytime soon without making a lot more effort.



Get Ready for the Total Solar Eclipse of 2017
By Jeffrey Bryant

Eclipse illustrations with the Wolfram Language

On August 21, 2017, an event will happen across parts of the Western Hemisphere that has not been seen by most people in their lifetimes. A total eclipse of the Sun will sweep across the face of the United States and nearby oceans. Although eclipses of this type are not uncommon across the world, the chance of one happening near you is quite small and is often a once-in-a-lifetime event unless you happen to travel the world regularly. This year, the total eclipse will be within driving distance of most people in the lower 48 states.

Total eclipses of the Sun are a result of the Moon moving in front of the Sun, from the point of view of an observer on the Earth. The Moon’s shadow is quite small and only makes contact with the Earth’s surface in a small region, as shown in the following illustration.

Solar eclipse geometry illustration

We can make use of 3D graphics in the Wolfram Language to create a more realistic visualization of this event. First, we will want to make use of a texture to make the Earth look more realistic.

texture =    ImageCrop[PlanetData["Earth", "CylindricalEquidistantTexture"]];

We can apply the texture to a rotated spherical surface as follows.

earthGraphic3D =
  Rotate[ParametricPlot3D[{Cos[t] Cos[p], Sin[t] Cos[p], Sin[p]},
     {t, 3/2 Pi, 7/2 Pi}, {p, -Pi/2, Pi/2}, Mesh -> None,
     PlotStyle -> Texture[texture]][[1]], 23.5 \[Degree], {0, 1, 0}];

We represent the Earth’s shadow as a cone.

earthShadowCone = {GrayLevel[0, .5],     Cone[{{-0.3, 0, 0}, {-11, 0, 0}}, 1]};

The Moon can be represented by a simple Sphere that is offset from the center of the scene, while its orbit is a simple dashed 3D path. Both are parameterized since the Moon’s orbit will precess in time. It’s useful to be able to supply values to these functions to get the shadow to line up where we want.

moonGraphic3D[\[Phi]_, \[Theta]_] :=
  With[{moonOrbitInclination = 5.16 \[Degree]},
   {GrayLevel[.5],
    Sphere[6 {Cos[\[Theta]] Cos[moonOrbitInclination Cos[\[Theta] + \[Phi]]],
       Sin[\[Theta]] Cos[moonOrbitInclination Cos[\[Theta] + \[Phi]]],
       Sin[moonOrbitInclination Cos[\[Theta] + \[Phi]]]}, 0.4]}]

moonOrbit[\[Theta]_] :=
  With[{moonOrbitInclination = 5.16 \[Degree]},
   ParametricPlot3D[
     6 {Cos[x] Cos[moonOrbitInclination Cos[x + \[Theta]]],
        Sin[x] Cos[moonOrbitInclination Cos[x + \[Theta]]],
        Sin[moonOrbitInclination Cos[x + \[Theta]]]}, {x, 0, 2 \[Pi]},
     PlotStyle -> {Red, Dashed, AbsoluteThickness[1]},
     PlotPoints -> 100][[1]]]

As with the Earth’s shadow, we represent the Moon’s shadow as a cone.

moonShadowCone[\[Phi]_, \[Theta]_] :=
  With[{moonOrbitInclination = 5.16 \[Degree]},
   {GrayLevel[0, .7],
    Cone[{#, # - {7, 0, 0}}, .4] &[
     6 {Cos[\[Theta]] Cos[#], Sin[\[Theta]] Cos[#], Sin[#]} &[
      moonOrbitInclination Cos[\[Theta] + \[Phi]]]]}]

Finally, we create some additional scene elements for use as annotations.

sunArrow = {Hue[0.2, 0.5, 1], Arrowheads -> Small,     Arrow[{{2, 0, 0}, {8, 0, 0}}]};

labels = {Text[Style["To Sun", Hue[0.2, 0.5, 1]], {8.5, -0.7, 0}],     Text[Style["Moon's Orbit", Hue[1, 0, 0.92]], {-2, -8, 1}]};

Now we just need to assemble the scene. We want the Moon to be directly in line with the Sun, so we use 0° as one of the parameters to achieve that. To precess the orbit in such a way that the shadow falls on North America, we use 70°. The rest is just styling information.

Graphics3D[{moonOrbit[70 \[Degree]], moonGraphic3D[70 \[Degree], 0 \[Degree]],
  moonShadowCone[70 \[Degree], 0 \[Degree]], earthShadowCone,
  earthGraphic3D, sunArrow, labels}, Axes -> False,
 PlotRange -> {{-13, 8}, {-6.5, 6.5}, {-3, 3}},
 Background -> GrayLevel[0.2],
 Lighting -> {{"Ambient", GrayLevel[.7]}, {"Point", Hue[0.2, 0.5, 1], {28, 0, 0}}},
 Boxed -> False, ViewAngle -> 0.14,
 ViewCenter -> {{0.5, 0.52, 0.5}, {0.2326, 0.6112}},
 ViewPoint -> {1.1, -2.5, 2.0}, ViewVertical -> {-0.02, -0.01, 0.99},
 ImageSize -> {500, 312}]

The Moon's orbit around the Earth is both eccentric and inclined relative to the plane of the Earth's orbit, which we can confirm by querying the corresponding entity properties.

Entity["PlanetaryMoon", "Moon"][{"Eccentricity", "Inclination"}]

This means that, due to the eccentric orbit, the Moon is sometimes farther from the Earth than at other times; it also means that, due to the orbital inclination, the Moon may be above or below the plane of the Earth's orbit around the Sun. Usually when the Moon passes "between" the Earth and the Sun, it appears either "above" or "below" the Sun from the point of view of an observer on the Earth's surface. Other effects influence the geometry as well, but from time to time everything lines up just right, and the Moon blocks part or all of the Sun's disk. On August 21, 2017, the geometry will be "just right," and from some places on Earth the Moon will cover at least part of the Sun.

Besides illustrating eclipse geometry, we can also make use of the Wolfram Language via GeoGraphics to create various maps showing where the eclipse is visible. With very little code, you can get elaborate results. For example, we can combine the functionality of SolarEclipse with GeoGraphics to show where the path of the 2017 total solar eclipse can be seen. Totality will be visible in a narrow strip that cuts right across the central United States.

GeoGraphics[{GeoStyling[Directive[EdgeForm[], Opacity[.15]]],    SolarEclipse[DateObject[{2017, 8, 21}], "GraphicsData",     EclipseType -> "Total"]},   GeoRange -> Entity["GeographicRegion", "NorthAmerica"],   GeoProjection -> "Orthographic", GeoBackground -> "Satellite"]

So which states are going to be able to see the total solar eclipse? The following example can be used to determine that. First, we retrieve the polygon corresponding to the total phase for the upcoming eclipse.

polygondata =    SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon",     EclipseType -> "Total"];

states = GeoEntities[polygondata, "AdministrativeDivision1"]

GeoGraphics[{Orange, Polygon /@ states}]

Suppose you want to zoom in on a particular state to see a bit more detail. At this level, we are only interested in the path of totality and the center line. Once again, we use SolarEclipse to obtain the necessary elements.

totalityCenterLine =    SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhaseCenterLine",     EclipseType -> "Total"];

totalityPolygon =    SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon",     EclipseType -> "Total"];

Then we just use GeoGraphics to easily generate a map of the state in question—Wyoming, in this case.

GeoGraphics[{totalityPolygon, Red, totalityCenterLine},   GeoRange ->    Entity["AdministrativeDivision", {"Wyoming", "UnitedStates"}]]

We can make use of the Wolfram Data Repository to obtain additional eclipse information, such as timing of the eclipse at various locations.

pathData =    ResourceData[    "Path of the Total Solar Eclipse of August 21st, 2017"];

timeLocationData =    DeleteMissing[    pathData[[     All, {"UT1", "NorthernLimit", "CentralLine", "SouthernLimit"}]],     1, 2];

We can use that data to construct annotated time marks for various points along the eclipse path.

timeMarks = {Thickness[.0025], GrayLevel[1, .5],
   Table[{
     Line[{timeLocationData[[i, 2]], timeLocationData[[i, 3]], timeLocationData[[i, 4]]}],
     {GrayLevel[1, 1],
      Text[Style[
        DateString[
         DateObject[timeLocationData[[i, 1]], TimeZone -> Entity["TimeZone", "America/Denver"]],
         {"Hour12", ":", "Minute", "AMPM"}], 12, Bold],
       timeLocationData[[i, 4]], {0, -1}, Background -> GrayLevel[.5]]}},
    {i, 1, Length[timeLocationData], 1}]};

Then we just combine the elements.

GeoGraphics[{
  Thickness[.005],
  {Red, totalityCenterLine, GeoStyling[GrayLevel[0, .2]], totalityPolygon},
  timeMarks},
 GeoRange -> Entity["AdministrativeDivision", {"Wyoming", "UnitedStates"}]]

Of course, even if the eclipse is happening, there is no guarantee that you will be able to witness it. If the weather doesn’t cooperate, you will simply notice that it will get dark in the middle of the day. Using WeatherData, we can try to predict which areas are likely to have cloud cover on August 21. The following example is based on a similar Wolfram Community post.

The following retrieves all of the counties that intersect with the eclipse’s polygon bounds.

usco = GeoEntities[polygondata, "AdministrativeDivision2"];

Most of the work is involved in looking at the "CloudCoverFraction" values for each county on August 21 for each year from 2001 to 2016 and finding the mean value for each county.

countydata[county_] :=
  Switch[#, {}, Missing["NotAvailable"], _, Mean[#]] &[
   Cases[Flatten[
     Table[Normal@WeatherData[county["Position"], "CloudCoverFraction", {2000 + y, 8, 21}],
      {y, 16}], 1], {_, n_?NumberQ} :> n]]

avcc = Map[countydata, usco] /. _Mean -> Missing["NotAvailable"];

We can then use GeoRegionValuePlot to plot these values. In general, it appears that most areas along this path have relatively low cloud cover on August 21, based on historical data.

GeoRegionValuePlot[Thread[usco -> avcc], ColorFunction -> "Rainbow",   GeoBackground -> "ReliefMap", PlotStyle -> Opacity[.75],   PlotRange -> All]

The total solar eclipse on August 21, 2017, is a big deal due to the fact that the path carries it across a large area of the United States. Make every effort to see it! Take the necessary safety precautions and wear eclipse-viewing glasses. If your kids are in school already, see if they are planning any viewing events. Just make sure to plan ahead, since traffic may be very heavy in areas near totality. Enjoy it!



High-School Summer Camp: A Two-Week Path to Computational Thinking
By Stephen Wolfram

The Summer Camp Was a Success!

How far can one get in teaching computational thinking to high-school students in two weeks? Judging by the results of this year's Wolfram High-School Summer Camp, the answer is: remarkably far.

I’ve been increasingly realizing what an immense and unique opportunity there now is to teach computational thinking with the whole stack of technology we’ve built up around the Wolfram Language. But it was a thrill to see just how well this seems to actually work with real high-school students—and to see the kinds of projects they managed to complete in only two weeks.

Wolfram Summer Camp 2017

We’ve been doing our high-school summer camp for 5 years now (as well as our 3-week Summer School for more experienced students for 15 years). And every time we do the camp, we figure out a little more. And I think that by now we really have it down—and we’re able to take even students who’ve never really been exposed to computation before, and by the end of the camp have them doing serious computational thinking—and fluently implementing their ideas by writing sometimes surprisingly sophisticated Wolfram Language code (as well as creating well-written notebooks and “computational essays” that communicate about what they’ve done).

Over the coming year, we’re going to be dramatically expanding our Computational Thinking Initiative, and working to bring analogs of the Summer Camp experience to as many students as possible. But the Summer Camp provides fascinating and important data about what’s possible.

The Setup for the Camp

So how did the Summer Camp actually work? We had a lot of applicants for the 40 slots we had available this year. Some had been pointed to the camp by parents, teachers, or previous attendees. But a large fraction had just seen mention of it in the Wolfram|Alpha sidebar. There were students from a range of kinds of schools around the US, and overseas (though we still have to figure out how to get more applicants from underserved populations). Our team had done interviews to pick the final students—and I thought the ones they’d selected were terrific.

Students at the Wolfram Summer Camp

The students’ past experience was quite diverse. Some were already accomplished programmers (almost always self-taught). Others had done a CS class or two. But quite a few had never really done anything computational before—even though they were often quite advanced in various STEM areas such as math. But almost regardless of background, it was striking to me how new the core concepts of computational thinking seemed to be to so many of the students.

How does one take an idea or a question about almost anything, and find a way to formulate it for a computer? To be fair, it’s only quite recently, with all the knowledge and automation that we’ve been able to build into the Wolfram Language, that it’s become realistic for kids to do these kinds of things for real. So it’s not terribly surprising that in their schools or elsewhere our students hadn’t really been exposed to such things before. But it’s now possible—and that means there’s a great new opportunity to seriously teach computational thinking to kids, and to position them to pursue the amazing range of directions that computational thinking is opening up.

It’s important, by the way, to distinguish between “computational thinking” and straight “coding”. Computational thinking is about formulating things in computational terms. Coding is about the actual mechanics of telling a computer what to do. One of our great goals with the Wolfram Language is to automate the process of coding as much as possible so people can concentrate on pure computational thinking. When one’s using lower-level languages, like C++ and Java, there’s no choice but to be involved with the detailed mechanics of coding. But with the Wolfram Language the exciting thing is that it’s possible to teach pure high-level computational thinking, without being forced to deal with the low-level mechanics of coding.

What does this mean in practice? I think it’s very empowering for students: as soon as they “get” a concept, they can immediately apply it, and do real things with it. And it was pretty neat at the Summer Camp to see how easily even students who’d never written programs before were able to express surprisingly sophisticated computational ideas in the Wolfram Language. Sometimes it seemed like students who’d learned a low-level language before were actually at a disadvantage. Though for me it was interesting a few times to witness the “aha” moment when a student realized that they didn’t have to break down their computations into tiny steps the way they’d been taught—and that they could turn some big blob of code they’d written into one simple line that they could immediately understand and extend.

Suggesting Projects

The Summer Camp program involves several hours each day of lectures and workshops aimed at bringing students up to speed with computational thinking and how to express it in the Wolfram Language. But the real core of the program is every student doing an individual, original, computational thinking project.

And, yes, this is a difficult thing to orchestrate. But over the years we’ve been doing our Summer School and Summer Camp we’ve developed a very successful way of setting this up. There are a bunch of pieces to it, and the details depend on the level of the students. But here let’s talk about high-school students, and this year’s Summer Camp.

Right before the camp we (well, actually, I) came up with a list of about 70 potential projects. Some are quite specific, some are quite open-ended, and some are more like “metaprojects” (e.g. pick a dataset in the Wolfram Data Repository and analyze it). Some are projects that could at least in some form already have been done quite a few years ago. But many projects have only just become possible—this year particularly as a result of all our recent advances in machine learning.

summer camp list

 

I tried to have a range of nominal difficulty levels for the projects. I say “nominal” because even a project that can in principle be done in an easy way can also always be done in a more elaborate and sophisticated way. I wanted to have projects that ranged from the extremely well defined and precise (implement a precise algorithm of this particular type), to ones that involved wrangling data or machine learning training, to ones that were basically free-form and where the student got to define the objective.

Many of the projects in this list might seem challenging for high-school students. But my calculation (which in fact worked out well) was that with the technology we now have, all of them are within range.

It’s perhaps interesting to compare the projects with what I suggested for this year’s Summer School. The Summer School caters to more experienced students—typically at the college, graduate school or postdoc level. And so I was able to suggest projects that require deeper mathematical or software engineering knowledge—or are just bigger, with a higher threshold to achieve a reasonable level of success.

summer school list

 

Matching Projects to Students

Before students start picking projects, it’s important that they understand what a finished project should look like, and what’s involved in doing it. So at the very beginning of the camp, the instructors went through projects from previous camps, and discussed what the “output” of a project should be. Maybe it’ll be an active website; maybe an interactive Demonstration; maybe it’ll be a research paper. It’s got to be possible to make a notebook that describes the project and its results, and to make a post about it for Wolfram Community.

After talking about the general idea of projects, and giving examples of previous ones, the instructors did a quick survey of this year’s suggestions list, filling in a few details of what the imagined projects actually were. After this, the students were asked to pick their top three projects from our list, and then invent two more potential projects of their own.

It’s always an interesting challenge to find the right project for each student—and it’s something I’ve personally been involved in at our Summer Camp for the past several years. (And, yes, it helps that I have decades of experience in organizing professional and research projects and figuring out the best people to do them.)

It’s taken us a few iterations, but here’s the approach we’ve found works well. First, we randomly break the students up into groups of a dozen or so. Then we meet with each group, going around the room and asking each student a little about themselves, their interests and goals—and their list of projects.

After we’re finished with each group, we meet separately and try to come up with a project for each student. Sometimes it’ll be one of the projects straight from our list. Sometimes it’ll be a project that the student themself suggested. And sometimes it’ll be some creative combination of these, or even something completely different based on what they said they were interested in.

After we think we’ve come up with a good project, the next step is to meet individually with each student and actually suggest it to them. It’s very satisfying that a lot of the time the students seem really enthused about the projects we end up suggesting. But sometimes it becomes clear that a project just isn’t a good fit—and then sometimes we modify it in real time, but more often we circle back later with a different suggestion.

Once the projects are set, we assign an appropriate mentor to each student, taking into account both the student and the subject of the project. And then things are off and running. We have various checkpoints, like that students have to write up descriptions of their projects and post them on the internal Summer Camp site.

I personally wasn’t involved in the actual execution of the projects (though I did have a chance to check in on a few of them). So it was pretty interesting for me to see at the end of the camp what had actually happened. It’s worth mentioning that our scheme is that mentors can make suggestions about projects, but all the final code in a project should be created by the student. And if one version of the project ends up being too difficult, it’s up to the mentor to simplify it. So however the final project comes out, it really is the student’s work.

Much of the time, the Summer Camp will be the first time students have ever done an original project. It could potentially seem daunting. But I think the fact that we give so many examples of other projects, and that everyone else at the camp is also doing a project, really helps. And in the end experiencing the whole process of going from the idea for a project to a real, finished project is incredibly educational—and seems to have a big effect on many of our students.

A Few Case Studies

OK, so that’s the theory. So what actually happened at this year’s Summer Camp? Well, here are all the projects the students did, with the titles they gave them:

final projects list

 

It’s a very interesting, impressive, and diverse list. But let me pick out a few semi-randomly to discuss in a bit more detail. Consider these as “case studies” for what high-school students can accomplish with the Wolfram Language in a couple of weeks at a summer camp.

Routing Airplanes around Mountains

One young man at our camp had quite a lot of math background, and told me he was interested in airplanes and flying, and had designed his own remote-control plane. I started thinking about all sorts of drone survey projects. But he didn’t have a drone with him—and we had to come up with a project that could actually be done in a couple of weeks. So I ended up suggesting the following: given two points on Earth, find how an airplane can get from one to the other by the shortest path that never needs to go above a certain altitude. (And, yes, a small-scale version of this would be relevant to things like drone surveying too.)

Here’s how the student did this project. First, he realized that one could think of possible flight paths as edges on a graph whose nodes are laid out on a grid on the Earth. Then he used the built-in GeoElevationData to delete nodes that couldn’t be visited because the elevation at that point was above the cutoff. Then he just used FindShortestPath to find the shortest path in the graph from the start to the end.
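
A minimal sketch of this grid-graph approach might look like the following. This is not the student's code; the endpoints, the coarse one-degree grid, the 100-mile connection radius and the 10,000-foot ceiling are all placeholder choices. It just shows how GeoElevationData and FindShortestPath fit together.

(* placeholder endpoints and a deliberately coarse grid *)
start = GeoPosition[Entity["City", {"Denver", "Colorado", "UnitedStates"}]];
finish = GeoPosition[Entity["City", {"SaltLakeCity", "Utah", "UnitedStates"}]];
grid = Flatten[Table[GeoPosition[{lat, lon}], {lat, 36, 44}, {lon, -114, -102}], 1];

(* keep only grid nodes below the assumed altitude ceiling *)
allowed = Select[grid, GeoElevationData[#] < Quantity[10000, "Feet"] &];
nodes = Join[{start, finish}, allowed];

(* connect nodes lying within ~100 miles of each other, weighting edges by distance *)
nearPairs = Select[Subsets[nodes, {2}], GeoDistance @@ # < Quantity[100, "Miles"] &];
flightGraph = Graph[nodes, UndirectedEdge @@@ nearPairs,
   EdgeWeight -> (QuantityMagnitude[GeoDistance @@ #] & /@ nearPairs)];

(* shortest path that never visits a "too-high" node *)
path = FindShortestPath[flightGraph, start, finish];
GeoGraphics[{Red, Thick, GeoPath[path]}]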

I thought this was a pretty clever solution. It was a nice piece of computational thinking to realize that the elements of paths could be thought of as edges on a graph with nodes removed. Needless to say, there were some additional details to get a really good result. First, the student added in diagonal connections on the grid, with appropriate weightings to still get the correct shortest path computation. And then he refined the path by successively merging line segments to better approximate a great-circle path, at each step using computational geometry to check that the path wouldn’t go through a “too-high” region.

Finding the shortest flight path

Finding Kiwi Calls

You never know what people are going to come to Summer Camp with. A young man from New Zealand came to our camp with some overnight audio recordings from outside his house featuring occasional periods of (quite strange-sounding) squawking that were apparently the calls of one or more kiwi birds. What the young man wanted to do was automatic “kiwi voice recognition”, finding the calls, and perhaps distinguishing different birds.

I said I thought this wouldn’t be a particularly easy project, but he should try it anyway. Looking at what happened, it’s clear the project started out well. It was easy to pull out all intervals in his audio that weren’t just silence. But that broke up everything, including kiwi calls, into very small blocks. He solved that with the following interesting piece of code, which uses pattern matching to combine symbolic audio objects:

Wolfram Language code for kiwi code project

At this point it might just have worked to use unsupervised machine learning and FeatureSpacePlot to distinguish kiwi from non-kiwi sound clips. But machine learning is still quite a hit-or-miss business—and in this case it wasn’t a hit. So what did the student do? Well, he built himself a tiny lightweight user interface in a notebook, then started manually classifying sound clips. (Various instructors commented that it was fortunate he brought headphones…)

After classifying 200 clips, he used Classify to automatically classify all the other clips. He did a variety of transformations to the data—applying signal processing, generating a spectrogram, etc. And in the end he got his kiwi classifier to 82% accuracy: enough to make a reasonable first pass on finding kiwi calls—and going down a path to computational ornithology.

Biomechanics of Walking and Running

One young woman said she’d recently gotten a stress fracture in her foot that she was told was related to the force she was putting on it while running. She asked if she could make a computational model of what was going on. I have to say I was pessimistic about being able to do that in two weeks—and I suggested instead a project that I thought would be more manageable, involving studying possible gaits (walk, trot, etc.) for creatures with different numbers of legs. But I encouraged her to spend a little time seeing if she could do her original project—and I suggested that if she got to the stage of actually modeling bones, she could use our built-in anatomical data.

The next I knew it was a day before the end of the Summer Camp, and I was looking at what had happened with the projects… and I was really impressed! She’d found a paper with an appropriate model, understood it, and implemented it, and now she had an interactive demonstration of the force on a foot during walking or running. She’d even used the anatomical data to show a 3D image of what was happening.

She explained that when one walks there are two peaks in the force, but when one runs, there’s only one. And when I set her interactive demonstration for my own daily walking regimen I found out that (as she said was typical) I put a maximum force of about twice my weight on my foot when I walk.

Biomechanics of Walking and Running

Banana Ripeness Classifier

At first I couldn’t tell if he was really serious… but one young man insisted he wanted to use machine learning to tell when a piece of fruit is ripe. As it happens, I had used pretty much this exact example in a blog post some time ago discussing the use of machine learning in smart contracts. So I said, “sure, why don’t you try it”. I saw the student a few times during the Summer Camp, curiously always carrying a banana. And what I discovered at the end of the camp was that that very banana was a key element of his project.

At first he searched the web for images of bananas described as “underripe”, “overripe”, etc., then arranged them using FeatureSpacePlot:

Banana ripeness

Then he realized that he could get more quantitative by first looking at where in color space the pixels of the banana image lay. The result was that he was actually able to define a “banana ripeness scale”, where, as he described it: “A value of one was assigned to bananas that were on the brink of spoilage. A value of zero was assigned to a green banana fresh off a tree. A value of 0.5 was assigned to the ‘perfect’ banana.” It’s a nice example of how something everyday and qualitative can be made computational.
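
As a rough illustration of that color-space idea (not the student's actual code; "banana.jpg" is just a placeholder file name), one could look at the mean hue of the pixels:

img = Import["banana.jpg"];  (* placeholder image of a banana *)
meanHue = Mean[Flatten[ImageData[ColorConvert[img, "HSB"]][[All, All, 1]]]]

A green banana sits at a different point in hue space than a brown, overripe one, so a number like this could plausibly be rescaled onto the 0-to-1 ripeness scale described above.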

For his project, the student made a “Banana Classifier” app that he deployed through the Wolfram Cloud. And he even had an actual banana to test it on!

Banana classifier

Number Internationalization

One of my suggested projects was to implement “international or historical numeral systems”—the analogs of things like Roman numerals but for different cultures and times. One young woman fluent in Korean said she’d like to do this project, starting with Korean.

As it happens, our built-in IntegerName function converts to traditional Korean numerals. So she set herself the task of converting from Korean numerals. It’s an interesting algorithmic exercise, and she solved it with some nice, elegant code.
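
The built-in direction is a one-liner (assuming "Korean" is accepted here as the language specification):

IntegerName[2017, "Korean"]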

Korean to Hindi-Arabic numerals

By that point, she was on a roll… so she decided to go on to Burmese, and Thai. She tried to figure out Burmese from web sources… only to discover they were inconsistent… with the result that she ended up contacting a person who had an educational video about Burmese numerals, and eventually unscrambled the issue, wrote code to represent it, and then corrected the Wikipedia page about Burmese numerals. All in all, a great example of real-world algorithm curation. Oh, and she set up the conversions as a Wolfram Language microsite on the web.

Is That a Joke?

Can machine learning tell if something is funny? One young man at the Summer Camp wanted to find out. So for his project he used our Reddit API connection to pull jokes from the Jokes subreddit, and (presumably) non-jokes from the AskReddit subreddit. It took a bit of cleanup and data wrangling… but then he was able to feed his training data straight into the Classify function, and generated a classifier from which he then built a website.
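
The training step itself can be very compact. As a hedged sketch (jokeTitles and askRedditTitles stand for hypothetical lists of strings pulled from the two subreddits, not the student's actual variables):

jokeClassifier = Classify[<|"Joke" -> jokeTitles, "NotJoke" -> askRedditTitles|>];
jokeClassifier["Why don't scientists trust atoms? Because they make up everything."]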

It’s a little hard to know how well it works outside of “Reddit-style humor”—but his anecdotal study at the Summer Camp suggested about a 90% success rate.

Is this a joke machine

Making and Checking Checksums

Different projects involve different kinds of challenges. Sometimes the biggest challenge is just to define the project precisely enough. Other times it’s to get—or clean—the data that’s needed. Still other times, it’s to find a way to interpret voluminous output. And yet other times, it’s to see just how elegantly some particular idea can be implemented.

One math-oriented young woman at the camp picked “implementing checksum algorithms” from my list. Such algorithms (used for social security numbers, credit card numbers, etc.) are very nicely and precisely defined. But how simply and elegantly can they be implemented in the Wolfram Language? It’s a good computational thinking exercise—that requires really understanding both the algorithms and the language. And for me it’s nice to be able to immediately read off from the young woman’s code just how these checksum algorithms work…

Checksums algorithm
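
As an illustration of how compactly such an algorithm can come out (this is a sketch of the standard Luhn check used for credit card numbers, not the student's code):

(* Luhn check: from the right, double every second digit, sum all resulting digits, require a multiple of 10 *)
luhnValidQ[digits_List] :=
  Mod[Total[Flatten[IntegerDigits /@
      MapIndexed[If[EvenQ[First[#2]], 2 #1, #1] &, Reverse[digits]]]], 10] == 0

luhnValidQ[IntegerDigits[79927398713]]  (* a standard valid test number *)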

4D Plotting on a Tesseract

How should one plot a function in 4D? I had a project in my list about this, though I have to admit I hadn’t really figured out how it should be done. But, fortunately, a young man at the Summer Camp was keen to try to work on it. And with an interesting mixture of computational and mathematical thinking, he created ParametricPlot4D—then did a bunch of math to figure out how to render the results in what seemed like two useful ways: as an orthogonal projection, and as a stereographic projection. A Manipulate makes the results interactive—and they look pretty neat…

Plotting a tesseract

State-Level Mortality

In addition to my explicit list of project suggestions, I had a “meta suggestion”: take any dataset, for example from the new Wolfram Data Repository, and try to analyze and understand it. One student took a dataset about meteorite impacts; another about the recent Ebola outbreak in Africa. One young woman said she was interested in actuarial science—so I suggested that she look at something quintessentially actuarial: mortality data.

I suggested that maybe she could look at the (somewhat macabrely named) Death Master File. I wasn’t sure how far she’d get with it. But at the end of the camp I found out that she’d processed 90 million records—and successfully reduced them to derive aggregate survival curves for 25 different states and make an interactive Demonstration of the results. (Letting me conclude, for example, that my current probability of living to age 100 is 28% higher in Massachusetts than in Indiana…)

Survival image

“OCRing” Regular Tiling

Each year when I make up a list of projects for the Summer Camp I wonder if there’ll be particular favorites. My goal is actually to avoid this, and to have as uniform a distribution of interest in the projects as possible. But this year “Use Machine Learning to Identify Polyhedra” ended up being a minor favorite. And one consequence was that a student had already started working on the project even before we’d talked to him—even though by that time the project was already assigned to someone else.

But actually the “recovery” was better than the original. Because we figured out a really nice alternative project that was very well suited to the student. The project was to take images of regular tilings, say from a book, and to derive a computational representation of them, suitable, say, for LatticeData.

The student came up with a pretty sophisticated approach, largely based on image processing, but with a dash of computational geometry, combinatorics and even some cluster analysis thrown in. First, he used fairly elaborate image processing to identify the basic unit in the tiling. Then he figured out how this unit was arranged to form the final tiling. It ended up being about 102 lines of fairly dense algorithmic code—but the result was a quite robust “tiling OCR” system, that he also deployed on the web.

Analyzing regular tilings

Finding Buildings in Satellite Images

In my list I had a project “Identify buildings from satellite images”. A few students thought it sounded interesting, but as I thought about it some more, I got concerned that it might be really difficult. Still, one of our students was a capable young man who already seemed to know a certain amount about machine learning. So I encouraged him to give it a try. He ended up doing an impressive job.

He started by getting training data by comparing satellite images with street maps that marked buildings (and, conveniently, starting with the upcoming version of the Wolfram Language, not only street maps but also satellite images are built in):

Buildings

Then he used NetChain to build a neural net (based on the classic LeNet network, but modified). And then he started trying to classify parts of images as “building” or “not building”.
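
A LeNet-style NetChain for this kind of two-class problem might be set up roughly as follows (a sketch with assumed input size and layer widths, not the student's actual network):

net = NetChain[{
    ConvolutionLayer[20, 5], Ramp, PoolingLayer[2],
    ConvolutionLayer[50, 5], Ramp, PoolingLayer[2],
    FlattenLayer[], 500, Ramp, 2, SoftmaxLayer[]},
   "Input" -> NetEncoder[{"Image", {64, 64}}],
   "Output" -> NetDecoder[{"Class", {"building", "not building"}}]];

(* trained = NetTrain[net, trainingData] for trainingData of the form image -> class *)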

FinalNetUpdate3[ ]

The results weren’t at all bad. But so far they were only answering the question “is there a building in that square?”, not “where is there a building?”. So then—in a nice piece of computational thinking—the student came up with a further idea: just have a window pan across the image, at each step estimating the probability of building vs. not-building. The result was a remarkably accurate heat map of where buildings might be.

TempMapDensity[ , FinalNetUpdate3, 60, 20]

It’d be a nice machine learning result for anyone. But as something done by a high-school student in two weeks I think it’s really impressive. And another great example of what’s now possible at an educational level with our whole Wolfram Language technology stack.

Beyond the Summer Camp

OK, so our Summer Camp was a success, and, with luck, the students from it are now successfully “launched” as independent computational thinkers. (The test, as far as I’m concerned, is whether when confronted with something in their education or their lives, they routinely turn to computational thinking, and just “write a program to solve the problem”. I’m hopeful that many of them now will. And, by the way, they immediately have “marketable skills”—like being able to do all sorts of data-science-related things.)

But how can we scale up what we’ve achieved with the Summer Camp? Well, we have a whole Computational Thinking Initiative that we’ve been designing to do just that. We’ll be rolling out different parts over the next little while, but one aspect will be doing other camps, and enabling other people to also do camps.

We’ve now got what amounts to an operations manual for how to “do a camp”. But suffice it to say that the core of it is to have instructors with good knowledge of the Wolfram Language (e.g. to the level of our Certified Instructor program), access to a bunch of great students, and use of a suitable venue. Two weeks seems to be a good length, though longer would work too. (Shorter will probably not be sufficient for students without prior experience to get to the point of doing a real project.)

Our camp is for high-school students (mainly aged 15 through 17). I think it would also be possible to do a successful camp for advanced middle-school students (maybe aged 12 and 13). And, of course, our long-running Summer School provides a very successful model for older students.

Beyond camps, we’ve had for some time a mentorships program which we will be streamlining and scaling up—helping students to work on longer-term projects. We’re also planning a variety of events and venues in which students can showcase their computational thinking work.

But for now it’s just exciting to see what was achieved in two weeks at this year’s Summer Camp. Yes, with the tech stack we now have, high-school students really can do serious computational thinking—that will make them not only immediately employable, but also positioned for what I think will be some of the most interesting career directions of the next few decades.



Announcing SystemModeler 5: Symbolic Parametric Simulation, Modular Reconfigurability and 200 New Built-in Components
By Roger Germundsson

New in SystemModeler

Our goal with SystemModeler is to provide a state-of-the-art environment for modeling, simulation—and analytics—that leverages the Wolfram technology stack and builds on the Modelica standard for systems description (that we helped to develop).

SystemModeler is routinely used by the world’s engineering organizations on some of the world’s most complex engineering systems—as well as in fields such as life sciences and social science. We’ve been pursuing the development of what is now SystemModeler for more than 15 years, adding more and more sophistication to the capabilities of the system. And today we’re pleased to announce the latest step forward: SystemModeler 5.

As part of the 4.1, 4.2, 4.3 sequence of releases, we completely rebuilt and modernized the core computational kernel of SystemModeler. Now in SystemModeler 5, we’re able to build on this extremely strong framework to add a whole variety of new capabilities.

Some of the headlines include:

  • Support for continuous media such as fluids and gases, using the latest Modelica libraries
  • Almost 200 additional Modelica components, including Media, PowerConverters and Noise libraries
  • Complete visual redesign of almost 6000 icons, for consistency and improved readability
  • Support for new GUI workspaces optimized for different levels of development and presentation
  • Almost 500 built-in example models for easy exploration and learning
  • Modular reconfigurability, allowing different parts of models to be easily switched and modified
  • Symbolic parametric simulation: the ability to create a fully computable object representing variations of model parameters
  • Importing and exporting FMI 2 models for broad model interchange and system integration

Latest Modelica Libraries

A modeling project is greatly simplified if there is a library available for the topic. A library essentially provides the modeling language for that domain, consisting of components, sensors, sources and interfaces. Using these elements, building a model typically consists of dragging components, sensors and sources into a model space and then connecting their interfaces, as in this video:

SystemModeler already comes with an amazing collection of libraries for different domains, such as electrical (analog, digital, power), mechanical (1- or 3-dimensional), thermal (heat transfer, thermo-fluid flow), etc. And with the SystemModeler Library Store, there are many free and paid libraries that add to this collection.

With SystemModeler 5, we provide the latest version of the Modelica Standard Library (3.2.2) that we helped develop with industry and academic partners, adding almost 200 new components. Some of the new libraries include Media, PowerConverters and Noise, as well as several other sublibraries and utilities.

The Media library, which contains models for the behavior of common gases and liquids, is technically advanced and required major updates to the SystemModeler kernel. The models range from ideal one-component gases to multicomponent media with phase transitions and nonlinear effects.

Let’s look at a basic example: have you ever noticed how when you use a compressed air duster the temperature of the can seems to drop rapidly? The following is a model of a 1 liter can at 75 psi and room temperature with a nozzle that restricts the flow out of the can into an ambient environment at atmospheric pressure.

Air duster with model

The temperature of the three canister parts above is dependent on the medium inside each of them. If you want to analyze how the canister behaves with a different gas, all individual components would need to change to reflect this. In SystemModeler 5, we have made it so that you can reconfigure the whole model instead, setting one value to switch out all the parts at once.

Here, two different gases with identical starting temperatures and pressures are shown. If you compare a canister containing normal air to one containing helium gas, you can see that the much denser air retains its temperature better than the less dense helium gas.

Ideal Helium graph

A similar example modeling the stresses caused by expanding gases in a tank can be found here.

New Icons and Workspaces

A great feature when building models using the drag-drop-connect paradigm is that the resulting model diagram is an effective way to communicate and understand models—whether that is for presentations and reports or when interactively exploring and learning about a model.

SystemModeler diagrams are now even more effective for visual communication, as a result of the redesign of nearly 6000 icons for improved consistency and readability. See this video for a short overview:

SystemModeler icons

In addition, we have designed GUI workspaces that are optimized for different scenarios from presenter to developer. The main difference is the amount of tools and information panels that are readily available. What is essential for advanced development is mostly clutter when presenting or exploring, for instance.

GUI workspaces

Reconfigurable Models

It is often very convenient to provide a single interface to reconfigure multiple aspects of a complex model. This makes it easy to provide a few major model scenarios, for instance. SystemModeler 5 fully supports reconfigurable models, including interactive support for configuration management, which is an industry first.

Let’s use the wheels of a car as an example: say you want to test how different tires perform in a tight corner on a slippery surface. Instead of changing each of the tire model components, we can simply select the desired model configuration from a drop-down menu.

Car tire model

We have gone from Bambi on ice to Formula 1 performance. To understand these tracks, see this video:

Download this example and try it out for yourself! Now, if only changing the tires on a real car were as quick.

Parametric Simulation

When you build a model, you typically want it to have parameters that can be tuned or fitted. With SystemModeler 5, you can now immediately explore and optimize different parameter values efficiently, using the function WSMParametricSimulateValue.

For instance, in this example we consider the rope length and release time for a medieval trebuchet. Using optimization functions, we can find the optimal parameter values that maximize the range of this ancient war machine. The "value" for this system is a whole trajectory, some of which you can see below. Notice that if you release the stone at the wrong time, the trajectory actually goes in the wrong direction (colored red below).

Trebuchet graph
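
A rough sketch of how such an optimization could be set up is shown below; the model name, output variable and parameter names here are hypothetical stand-ins, and the call pattern simply mirrors the WSMParametricSimulateValue usage shown later in this post for the car example.

(* hypothetical model and variable names *)
range = WSMParametricSimulateValue["Trebuchet", "projectileDistance", {"ropeLength", "releaseTime"}];

(* assuming range[l, t] returns a time function, evaluate it at the end of the simulation *)
objective[l_?NumericQ, t_?NumericQ] := range[l, t][10.]

NMaximize[{objective[l, t], 2 <= l <= 8, 0 <= t <= 3}, {l, t}]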

Using the function WSMParametricSimulateValue, you can also efficiently provide for interactive exploration of parameter spaces. Let us return to the earlier car example: say you want to analyze the turning car further and look at a few parameters in greater detail. Of particular interest could be to explore how speed, road friction and turning radius affect the car's ability to follow the desired trajectory.

parametricSim =
  WSMParametricSimulateValue["SlidingCar.Car",
   {"LeftRoadXPos", "LeftRoadYPos", "RightRoadXPos", "RightRoadYPos", "CarXPos", "CarYPos"},
   {"throttle", "turningRadius", "friction"}, WSMProgressMonitor -> False]

SystemModeler parametric function

The parametric simulate function can then be used in a Manipulate.

Manipulate[
 Module[{simRes = parametricSim[throttle, turnRadius, friction]},
  With[{res = ArrayReshape[Evaluate[Through[simRes[t]]], {3, 2}]},
   ParametricPlot[res, {t, 0, 15},
    PlotStyle -> {RGBColor[0., 0., 0.], RGBColor[0., 0., 0.], RGBColor[0.368417, 0.506779, 0.709798]},
    PlotRange -> All, AspectRatio -> Automatic]]],
 {{throttle, 15, "Throttle"}, 0, 50},
 {{turnRadius, 0.2, "Turning radius"}, 0, 0.2},
 {{friction, 0.9, "Road Conditions"}, {0.9 -> "Asphalt", 0.5 -> "Gravel", 0.05 -> "Ice"}},
 ContinuousAction -> False]

Model manipulate

Model Exchange with FMI

The FMI (functional mock-up interface) standard is a broad industry standard for model exchange between simulation and system integration tools. The standard was originally proposed by Daimler and has been developed by a broad group of industry and academic partners, including us, over the course of several years. Typical use cases include:

  • To import and export models between different tools, allowing different teams and companies to cooperate loosely
  • To package models as a way to protect intellectual property, whether they rely on different tools or not
  • To package models for system integration and perform hardware-in-the-loop simulation

SystemModeler 5 now fully supports both FMI 1.0 and FMI 2.0 for model import and export, and with some 100 tools listed as supporting or planning support for this standard, this is by far the easiest way to integrate workflows between different tools, teams and companies.

Getting back to the car example, it is clearly struggling on the slippery surface. You could try to improve the cornering by adding an antilock braking system (ABS) to the model. However, you probably will not be able to get your hands on open source code for such a system, as these controllers are likely to be proprietary. Instead, we can import an FMU (functional mock-up unit)—the kind of object actually exchanged in the FMI standard—for the ABS system.

Functional mockup unit

By importing an FMU of the ABS controller, you can connect it like any other component. In the following simulation, the driver tries to steer to the right while slamming on the brakes. Without ABS, the wheels quickly lock up and the car will keep heading straight ahead. The ABS will, however, employ cadence braking, preventing the wheels from locking up and allowing the car to steer to the right.

FMI comparison graph

Explore Further

See What’s New for all the new features with examples. See features and examples for a more comprehensive presentation of SystemModeler as a modeling, simulation and analytics environment. To get going right away, get the trial here. If you are new to SystemModeler, get started with these videos.

The Practical Business of Ontology: A Tale from the Front Lines
By Stephen Wolfram

The Philosophy of Chemicals

“We’ve just got to decide: is a chemical like a city or like a number?” I spent my day yesterday—as I have for much of the past 30 years—designing new features of the Wolfram Language. And yesterday afternoon one of my meetings was a fast-paced discussion about how to extend the chemistry capabilities of the language.

At some level the problem we were discussing was quintessentially practical. But as so often turns out to be the case for things we do, it ultimately involves some deep intellectual issues. And to actually get the right answer—and to successfully design language features that will stand the test of time—we needed to plumb those depths, and talk about things that usually wouldn’t be considered outside of some kind of philosophy seminar.

Thinker

Part of the issue, of course, is that we’re dealing with things that haven’t really ever come up before. Traditional computer languages don’t try to talk directly about things like chemicals; they just deal with abstract data. But in the Wolfram Language we’re trying to build in as much knowledge about everything as possible, and that means we have to deal with actual things in the world, like chemicals.

We’ve built a whole system in the Wolfram Language for handling what we call entities. An entity could be a city (like New York City), or a movie, or a planet—or a zillion other things. An entity has some kind of name (“New York City”). And it has definite properties (like population, land area, founding date, …).

We’ve long had a notion of chemical entities—like water, or ethanol, or tungsten carbide. Each of these chemical entities has properties, like molecular mass, or structure graph, or boiling point.
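
For instance, asking the water entity for a couple of the properties mentioned above:

Entity["Chemical", "Water"][{"MolarMass", "BoilingPoint"}]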

And we’ve got many hundreds of thousands of chemicals where we know lots of properties. But all of these are in a sense concrete chemicals: specific compounds that we could put in a test tube and do things with.

But what we were trying to figure out yesterday is how to handle abstract chemicals—chemicals that we just abstractly construct, say by giving an abstract graph representing their chemical structures. Should these be represented by entities, like water or New York City? Or should they be considered more abstract, like lists of numbers, or, for that matter, mathematical graphs?

Well, of course, among the abstract chemicals we can construct are chemicals that we already represent by entities, like sucrose or aspirin or whatever. But here there’s an immediate distinction to make. Are we talking about individual molecules of sucrose or aspirin? Or about these things as bulk materials?

At some level it’s a confusing distinction. Because, we might think, once we know the molecular structure, we know everything—it’s just a matter of calculating it out. And some properties—like molar mass—are basically trivial to calculate from the molecular structure. But others—like melting point—are very far from trivial.

OK, but is this just a temporary problem that one shouldn’t base a long-term language design on? Or is it something more fundamental that will never change? Well, conveniently enough, I happen to have done a bunch of basic science that essentially answers this: and, yes, it’s something fundamental. It’s connected to what I call computational irreducibility. And for example, the precise value of, say, the melting point for an infinite amount of some material may actually be fundamentally uncomputable. (It’s related to the undecidability of the tiling problem; fitting in tiles is like seeing how molecules will arrange to make a solid.)

So by knowing this piece of (rather leading-edge) basic science, we know that we can meaningfully make a distinction between bulk versions of chemicals and individual molecules. Clearly there’s a close relation between, say, water molecules, and bulk water. But there’s still something fundamentally and irreducibly different about them, and about the properties we can compute for them.

At Least the Atoms Should Be OK

Alright, so let’s talk about individual molecules. Obviously they’re made of atoms. And it seems like at least when we talk about atoms, we’re on fairly solid ground. It might be reasonable to say that any given molecule always has some definite collection of atoms in it—though maybe we’ll want to consider “parametrized molecules” when we talk about polymers and the like.

But at least it seems safe to consider types of atoms as entities. After all, each type of atom corresponds to a chemical element, and there are only a limited number of those on the periodic table. Now of course in principle one can imagine additional “chemical elements”; one could even think of a neutron star as being like a giant atomic nucleus. But again, there’s a reasonable distinction to be made: almost certainly there are only a limited number of fundamentally stable types of atoms—and most of the others have ridiculously short lifetimes.

There’s an immediate footnote, however. A “chemical element” isn’t quite as definite a thing as one might imagine. Because it’s always a mixture of different isotopes. And, say, from one tungsten mine to another, that mixture might change, giving a different effective atomic mass.

And actually this is a good reason to represent types of atoms by entities. Because then one just has to have a single entity representing tungsten that one can use in talking about molecules. And only if one wants to get properties of that type of atom that depend on qualifiers like which mine it’s from does one have to deal with such things.

In a few cases (think heavy water, for example), one will need to explicitly talk about isotopes in what is essentially a chemical context. But most of the time, it’s going to be enough just to specify a chemical element.

To specify a chemical element you just have to give its atomic number Z. And then textbooks will tell you that to specify a particular isotope you just have to say how many neutrons it contains. But that ignores the unexpected case of tantalum. Because, you see, one of the naturally occurring forms of tantalum (180mTa) is actually an excited state of the tantalum nucleus, which happens to be very stable. And to properly specify this, you have to give its excitation level as well as its neutron count.
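As a hedged illustration, here is how the element and that mass number can be looked up with ElementData and IsotopeData (whether the long-lived isomer 180mTa has its own separate entry is something I have not checked):

ElementData["Tantalum", "AtomicNumber"] (* 73 *)

IsotopeData["Tantalum180", "HalfLife"]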

In a sense, though, quantum mechanics saves one here. Because while there are an infinite number of possible excited states of a nucleus, quantum mechanics says that all of them can be characterized just by two discrete values: spin and parity.

Every isotope—and every excited state—is different, and has its own particular properties. But the world of possible isotopes is much more orderly than, say, the world of possible animals. Because quantum mechanics says that everything in the world of isotopes can be characterized just by a limited set of discrete quantum numbers.

We’ve gone from molecules to atoms to nuclei, so why not talk about particles too? Well, it’s a bigger can of worms. Yes, there are the well-known particles like electrons and protons that are pretty easy to talk about—and are readily represented by entities in the Wolfram Language. But then there’s a zoo of other particles. Some of them—just like nuclei—are pretty easy to characterize. You can basically say things like: “it’s a particular excited state of a charm-quark-anti-charm-quark system” or some such. But in particle physics one’s dealing with quantum field theory, not just quantum mechanics. And one can’t just “count elementary particles”; one also has to deal with the possibility of virtual particles and so on. And in the end the question of what kinds of particles can exist is a very complicated one—rife with computational irreducibility. (For example, what stable states there can be of the gluon field is a much more elaborate version of something like the tiling problem I mentioned in connection with melting points.)

Maybe one day we’ll have a complete theory of fundamental physics. And maybe it’ll even be simple. But exciting as that will be, it’s not going to help much here. Because computational irreducibility means that there’s essentially an irreducible distance between what’s underneath, and what phenomena emerge.

And in creating a language to describe the world, we need to talk in terms of things that can actually be observed and computed about. We need to pay attention to the basic physics—not least so we can avoid setups that will lead to confusion later. But we also need to pay attention to the actual history of science, and actual things that have been measured. Yes, there are, for example, an infinite number of possible isotopes. But for an awful lot of purposes it’s perfectly useful just to set up entities for ones that are known.

The Space of Possible Chemicals

But is it the same in chemistry? In nuclear physics, we think we know all the reasonably stable isotopes that exist—so any additional and exotic ones will be very short-lived, and therefore probably not important in practical nuclear processes. But it’s a different story in chemistry. There are tens of millions of chemicals that people have studied (and, for example, put into papers or patents). And there’s really no limit on the molecules that one might want to consider, and that might be useful.

But, OK, so how can we refer to all these potential molecules? Well, in a first approximation we can specify their chemical structures, by giving graphs in which every node is an atom, and every edge is a bond.
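As a toy sketch of that representation (not any particular internal format), a water molecule can be written as a labeled graph with atoms as vertices and bonds as edges:

Graph[{1 <-> 2, 1 <-> 3}, VertexLabels -> {1 -> "O", 2 -> "H", 3 -> "H"}]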

What really is a “bond”? While it’s incredibly useful in practical chemistry, it’s at some level a mushy concept—some kind of semiclassical approximation to a full quantum mechanical story. There are some standard extra bits: double bonds, ionization states, etc. But in practice chemistry is very successfully done just by characterizing molecular structures by appropriately labeled graphs of atoms and bonds.

OK, but should chemicals be represented by entities, or by abstract graphs? Well, if it’s a chemical one’s already heard of, like carbon dioxide, an entity seems convenient. But what if it’s a new chemical that’s never been discussed before? Well, one could think about inventing a new entity to represent it.

Any self-respecting entity, though, better have a name. So what would the name be? Well, in the Wolfram Language, it could just be the graph that represents the structure. But maybe one wants something that seems more like an ordinary textual name—a string. Well, there’s always the IUPAC way of naming chemicals with names like 1,1′-{[3-(dimethylamino)propyl]imino}bis-2-propanol. Or there’s the more computer-friendly SMILES version: CC(CN(CCCN(C)C)CC(C)O)O. And whatever underlying graph one has, one can always generate one of these strings to represent it.

There’s an immediate problem, though: the string isn’t unique. In fact, however one chooses to write down the graph, it can’t always be unique. A particular chemical structure corresponds to a particular graph. But there can be many ways to draw the graph—and many different representations for it. And in fact even the (“graph isomorphism”) problem of determining whether two representations correspond to the same graph can be difficult to solve.
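Here is a tiny illustration of that point: two graphs written down differently that nonetheless describe the same structure, checked with IsomorphicGraphQ (which, as noted, can be expensive for large graphs):

g1 = Graph[{1 <-> 2, 2 <-> 3, 3 <-> 4}]; g2 = Graph[{"d" <-> "b", "b" <-> "a", "a" <-> "c"}]; IsomorphicGraphQ[g1, g2] (* True *)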

What Is a Chemical in the End?

OK, so let’s imagine we represent a chemical structure by a graph. At first, it’s an abstract thing. There are atoms as nodes in the graph, but we don’t know how they’d be arranged in an actual molecule (and e.g. how many angstroms apart they’d be). Of course, the answer isn’t completely well defined. Are we talking about the lowest-energy configuration of the molecule? (What if there are multiple configurations of the same energy?) Is the molecule supposed to be on its own, or in water, or whatever? How was the molecule supposed to have been made? (Maybe it’s a protein that folded a particular way when it came off the ribosome.)

Well, if we just had an entity representing, say, “naturally occurring hemoglobin”, maybe we’d be better off. Because in a sense that entity could encapsulate all these details.

But if we want to talk about chemicals that have never actually been synthesized it’s a bit of a different story. And it feels as if we’d be better off just with an abstract representation of any possible chemical.

But let’s talk about some other cases, and analogies. Maybe we should just treat everything as an entity. Like every integer could be an entity. Yes, there are an infinite number of them. But at least it’s clear what names they should be given. With real numbers, things are already messier. For example, there’s no longer the same kind of uniqueness as with integers: 0.99999… is really the same as 1.00000…, but it’s written differently.

What about sequences of integers, or, for that matter, mathematical formulas? Well, every possible sequence or every possible formula could conceivably be a different entity. But this wouldn’t be particularly useful, because much of what one wants to do with sequences or formulas is to go inside them, and transform their structure. But what’s convenient about entities is that they’re each just “single things” that one doesn’t have to “go inside”.

So what’s the story with “abstract chemicals”? It’s going to be a mixture. But certainly one’s going to want to “go inside” and transform the structure. Which argues for representing the chemical by a graph.

But then there’s potentially a nasty discontinuity. We’ve got the entity of carbon dioxide, which we already know lots of properties about. And then we’ve got this graph that abstractly represents the carbon dioxide molecule.

We might worry that this would be confusing both to humans and programs. But the first thing to realize is that we can distinguish what these two things are representing. The entity represents the bulk naturally occurring version of the chemical—whose properties have potentially been measured. The graph represents an abstract theoretical chemical, whose properties would have to be computed.

But obviously there’s got to be a bridge. Given a concrete chemical entity, one of the properties will be the graph that represents the structure of the molecule. And given a graph, one will need some kind of ChemicalIdentify function, that—a bit like GeoIdentify or maybe ImageIdentify—tries to identify from the graph what chemical entity (if any) has a molecular structure that corresponds to that graph.
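The entity-to-graph direction can already be sketched with existing data; here is one hedged example, assuming the "VertexTypes" and "EdgeRules" properties of ChemicalData (ChemicalIdentify itself, going the other way, is hypothetical here):

atoms = ChemicalData["CarbonDioxide", "VertexTypes"]; bonds = ChemicalData["CarbonDioxide", "EdgeRules"]; Graph[Range[Length[atoms]], UndirectedEdge @@@ bonds, VertexLabels -> Thread[Range[Length[atoms]] -> atoms]]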

Philosophy Meets Chemistry Meets Math Meets Physics…

As I write out some of the issues, I realize how complicated all this may seem. And, yes, it is complicated. But in our meeting yesterday, it all went very quickly. Of course it helps that everyone there had seen similar issues before: this is the kind of thing that’s all over the foundations of what we do. But each case is different.

And somehow this case got a bit deeper and more philosophical than usual. “Let’s talk about naming stars”, someone said. Obviously there are nearby stars that we have explicit names for. And some other stars may have been identified in large-scale sky surveys, and given identifiers of some kind. But there are lots of stars in distant galaxies that will never have been named. So how should we represent them?

That led to talking about cities. Yes, there are definite, chartered cities that have officially been assigned names–and we probably have essentially all of these right now in the Wolfram Language, updated regularly. But what about some village that’s created for a single season by some nomadic people? How should we represent it? Well, it has a certain location, at least for a while. But is it even a definite single thing, or might it, say, devolve into two villages, or not a village at all?

One can argue almost endlessly about identity—and even existence—for many of these things. But ultimately it’s not the philosophy of such things that we’re interested in: we’re trying to build software that people will find useful. And so what matters in the end is what’s going to be useful.

Now of course that’s not a precise thing to know. But it’s like for language design in general: think of everything people might want to do, then see how to set up primitives that will let people do those things. Does one want some chemicals represented by entities? Yes, that’s useful. Does one want a way to represent arbitrary chemical structures by graphs? Yes, that’s useful.

But to see what to actually do, one has to understand quite deeply what’s really being represented in each case, and how everything is related. And that’s where the philosophy has to meet the chemistry, and the math, and the physics, and so on.

I’m happy to say that by the end of our hour-long meeting yesterday (informed by about 40 years of relevant experience I’ve had, and collectively 100+ years from people in the meeting), I think we’d come up with the essence of a really nice way to handle chemicals and chemical structures. It’s going to be a while before it’s all fully worked out and implemented in the Wolfram Language. But the ideas are going to help inform the way we compute and reason about chemistry for many years to come. And for me, figuring out things like this is an extremely satisfying way to spend my time. And I’m just glad that in my long-running effort to advance the Wolfram Language I get to do so much of it.


To comment, please visit the copy of this post at the Stephen Wolfram Blog »

]]>
0
John Moore <![CDATA[Books from around the (Wolfram) World!]]> http://blog.internal.wolfram.com/?p=37181 2017-07-07T16:58:44Z 2017-07-07T16:40:03Z We’re always excited to see what new things people have created using Wolfram technologies. As the broad geographical distribution of Wolfram Community contributors illustrates, people all over the world are doing great things with the Wolfram Language. In this vein, today we want to highlight some recent books written in languages other than English that utilize Wolfram technologies. From engineering to statistics, these books provide valuable information for those looking to dig a little deeper into scientific applications of the Wolfram Language.

Recent Mathematica books

MATHEMATICA kompakt: Mathematische Problemlösungen für Ingenieure, Mathematiker und Naturwissenschaftler (German)

Hans Benker provides a brief and accessible introduction to Mathematica and shows its applications in problems of engineering mathematics, discussing the construction, operation and possibilities of the Wolfram Language in detail. He explores Mathematica usage for matrices and differential and integral calculus. The last part of the book is devoted to the advanced topics of engineering mathematics, including differential equations, transformations, optimization, probability and statistics. The calculations are all presented in detail and are illustrated by numerous examples.

Exploración de modelos matemáticos usando Mathematica (Spanish)

This book explores mathematical models that are traditionally studied in courses on differential equations, but from a unique perspective. The authors analyze models by modifying their initial parameters, transforming them into problems that would be practically impossible to solve in an analytical way. Mathematica provides an essential computational platform for solving these problems, particularly when they are graphical in nature.

Statistisk formelsamling: med bayesiansk vinkling (Norwegian)

Svein Olav Nyberg provides an undergraduate-level statistical formulary with support for Mathematica. This volume includes basic formulas for Bayesian techniques, as well as for general basic statistics. It is an essential primer for Norwegian-language students working in statistical analysis.

Recent Russian Mathematica textbooks

Funktsii kompleksnoy peremennoy, ryady i operatsionnoe ischislenie v zadachah i primerah v Mathematica: uchebnoe posobie (Russian)

Computational thinking is an increasingly necessary technique for problem solving in a range of disciplines, and Mathematica and the Wolfram Language equip students with a powerful computational tool. Approaching calculus from this perspective, K. V. Titov and N. D. Gorelov’s textbook provides a helpful introduction to using the Wolfram Language in the mathematics classroom.

Kompyuternaya matematika: uchebnoe posobie (Russian)

Another textbook from K. V. Titov, Kompyuternaya matematika: uchebnoe posobie emphasizes the use of computer technologies for mathematical analyses and offers practical solutions for numerous problems in various fields of science and technology, as well as their engineering applications. Titov discusses methodological approaches to problem solving in order to promote the development and application of online resources in education and to help integrate computer mathematics in educational technology.

These titles are just a sampling of the many books that explore applications of the Wolfram Language. You can find more Wolfram technologies books, both in English and other languages, by visiting the Wolfram Books site.

]]>
0
Swede White <![CDATA[Analyzing Social Networks of Colonial Boston Revolutionaries with the Wolfram Language]]> http://blog.internal.wolfram.com/?p=37053 2017-06-29T17:50:26Z 2017-06-29T17:15:50Z Revolutionary social networks lead image

As the Fourth of July approaches, many in America will celebrate 241 years since the founders of the United States of America signed the Declaration of Independence, their very own disruptive, revolutionary startup. Prior to independence, colonists would celebrate the birth of the king. However, after the Revolutionary War broke out in April of 1775, some colonists began holding mock funerals of King George III. Additionally, bonfires, celebratory cannon and musket fire and parades were common, along with public readings of the Declaration of Independence. There was also rum.

Today, we often celebrate with BBQ, fireworks and a host of other festivities. As an aspiring data nerd and a sociologist, I thought I would use the Wolfram Language to explore the Declaration of Independence using some basic natural language processing.

Using metadata, I’ll also explore a political network of colonists with particular attention paid to Paul Revere, using built-in Wolfram Language functions and network science to uncover some hidden truths about colonial Boston and its key players leading up to the signing of the Declaration of Independence.

The Declaration of Independence and the Wolfram Data Repository

The Wolfram Data Repository was recently announced and holds a growing collection of interesting resources for easily computable results.

Wolfram Data Repository

As it happens, the Wolfram Data Repository includes the full text of the Declaration of Independence. Let’s explore the document using WordCloud by first grabbing it from the Data Repository.

doi = ResourceData["Declaration of Independence"];

WordCloud[DeleteStopwords@doi]

Interesting, but this isn’t very patriotic thematically, so let’s add a ColorFunction for styling and use StringDelete to remove the signers’ names from the document.

WordCloud[  DeleteStopwords@   StringDelete[    ToLowerCase[doi], {"john", "thomas", "george", "samuel",      "francis", "lewis", "richard", "james", "morris", "benjamin",      "adams", "william", "jr.", "lee", "abraham"}],   FontFamily -> "Zapfino", ColorFunction -> "SolarColors"]

As we can see, the Wolfram Language has deleted the names of the signers and made words larger as a function of their frequency in the Declaration of Independence. What stands out is that the words “laws” and “people” appear the most frequently. This is not terribly surprising, but let’s look at the historical use of those words using the built-in WordFrequencyData functionality and DateListPlot for visualization. Keeping with a patriotic theme, let’s also use PlotStyle to make the plot red and blue.

DateListPlot[WordFrequencyData[{"laws", "people"}, "TimeSeries"],   PlotStyle -> {Red, Blue}, FrameTicks -> {True, False}]

What is incredibly interesting is that we can see a usage spike around 1776 in both words. The divergence between the use of the two words over time also strikes me as interesting.

A Social Network of Colonial Boston

According to historical texts, colonial Boston was a fascinating place in the late 18th century. David Hackett Fischer’s monograph Paul Revere’s Ride paints a comprehensive picture of the political factions that were driving the revolutionary movement. Of particular interest are the Masonic lodges and caucus groups that were politically active and central to the Revolutionary War.

Those of us raised in the United States will likely remember Paul Revere from our very first American history classes. He famously rode a horse through what is now the greater Boston area to warn the colonial militia of incoming British troops, a journey known as his “midnight ride,” notably captured in a poem by Henry Wadsworth Longfellow in 1860.

Up until Fischer’s exploration of Paul Revere’s political associations and caucus memberships, historians argued the colonial rebel movement was controlled by high-ranking political elites led by Samuel Adams, with many concluding Revere was simply a messenger. That he was, but through that messaging and other activities, he was key to joining together political groups that otherwise may not have communicated, as I will show through network analysis.

As it happens, this time last year I was at the Wolfram Summer School, which is currently in progress at Bentley University. One of the highlights of my time there was a lecture on social network analysis, led by Charlie Brummitt, that used metadata to analyze colonial rebels in Boston.

Duke University sociologist Kieran Healy has a fantastic blog post exploring this titled “Using Metadata to Find Paul Revere” that the lecture was derived from. I’m going to recreate some of his analysis with the Wolfram Language and take things a bit further with more advanced visualizations.

“Remember the ladies”

First, however, as a sociologist, my studies and research are often concerned with inequalities, power and marginalized groups. I would be remiss if I did not think of Abigail Adams’s correspondence with her husband John Adams on March 31, 1776, in which she instructed him to “remember the ladies” at the proceedings of the Continental Congress. I made a WordCloud of the letter here.

Adams word cloud

The data we are using is exclusively about men and membership data from male-only social and political organizations. It is worth noting that during the Revolutionary period, and for quite a while following, women were legally barred from participating in most political affairs. Women could vote in some states, but between 1777 and 1787, those rights were stripped in all states except New Jersey. It wasn’t until August 18, 1920, that the 19th Amendment passed, securing women’s right to vote unequivocally.

To that end, under English common law, women were treated as femes covert, meaning married women’s rights were absorbed by their husbands. Not only were women not allowed to vote, coverture laws dictated that a husband and wife were one person, with the former having sole political decision-making authority, as well as the ability to buy and sell property and earn wages.

Following the American Revolution, the United States was free from the tyranny of King George III; however, women were still subservient to men legally and culturally. For example, Hannah Griffitts, a poet known for her work about the Daughters of Liberty, “The Female Patriots,” expressed in a 1785 diary entry sentiments common among many colonial women:

The glorious fourth—again appears
A Day of Days—and year of years,
    The sum of sad disasters,
Where all the mighty gains we see
With all their Boasted liberty,
    Is only Change of Masters.

There is little doubt that without the domestic and emotional labor of women, often invisible in history, these men, the so-called Founding Fathers, would have been less successful and expedient in achieving their goals of independence from Great Britain. So today, we remember the ladies, the marginalized and the disenfranchised.

Political Groups of Colonial Boston: Obtaining the Data and Exploratory Analysis

Conveniently, I uploaded a cleaned association matrix of political group membership in colonial Boston as a ResourceObject to the Data Repository. We’ll import with ResourceData to give us a nice data frame to work with.

PaulRevereData =    ResourceData["Paul Revere's Social Network in Colonial Boston"];

colonistsNames = Normal@PaulRevereData[All, "Name"];

Length[colonistsNames]

We can see we have 254 colonists in our dataset. Let’s take a look at which colonial rebel groups Samuel Adams was a member of, as he’s known in contemporary times for a key ingredient in Fourth of July celebrations, beer.

PaulRevereData@SelectFirst[#["Name"] == "Samuel Adams" &]

Our True/False values indicate membership in one of seven political organizations: St. Andrews Lodge, Loyal Nine, North Caucus, the Long Room Club, the Tea Party, the Boston Committee of Correspondence and the London Enemies.

We can see Adams was a member of four of these. Let’s take a look at Revere’s memberships.

PaulRevereData@SelectFirst[#["Name"] == "Paul Revere" &]

As we can see, Revere was slightly more involved, as he is a member of five groups. We can easily graph his membership in these political organizations. For those of you unfamiliar with how a network functions, nodes represent agents and the lines between them represent some sort of connection, interaction or association.

lodges = Normal@Rest[Keys[First[PaulRevereData]]]; With[{g = Flatten[Normal[With[{row = #, name = #Name},         If[row[#], name <-> #, Nothing] & /@ lodges] & /@        PaulRevereData]]},   HighlightGraph[g,   NeighborhoodGraph[g, "Paul Revere", 1, VertexLabels -> Automatic],    GraphLayout -> "RadialDrawing", VertexLabels -> Automatic,    VertexLabelStyle -> Background -> White, ImageSize -> Large]]

There are seven organizations in total, so let’s see how they are connected by highlighting political organizations as red nodes, with individuals attached to each node.

HighlightGraph[Flatten[Normal[With[{row = #, name = #Name},       If[row[#], name <-> #, Nothing] & /@ lodges] & /@      PaulRevereData]], lodges, VertexLabels -> Placed["Name", Top],   ImageSize -> Large, VertexLabelStyle -> Background -> White,   VertexSize -> 3]

We can see the Tea Party and St. Andrews Lodge have many more members than Loyal Nine and others, which we will now explore further at the micro level.

Network of Individuals in Political Organizations: Closeness and Centrality

What we’ve done so far is fairly macro and exploratory. Let’s drill down by looking at each individual’s connection to one another by way of shared membership in these various groups. Essentially, we are removing our political organization nodes and focusing on individual colonists. We’ll use Tooltip to help us identify each actor in the network.

bipartiteAdjacencyMatrix = Boole@Normal[PaulRevereData[Values, Rest]]; edges = ReplacePart[    bipartiteAdjacencyMatrix.Transpose[bipartiteAdjacencyMatrix], {i_,       i_} -> 0]; personPersonGraph = AdjacencyGraph[colonistsNames, edges,   EdgeStyle -> {Opacity[.1]}, GraphLayout -> "RadialDrawing",    ImageSize -> Large, VertexSize -> Automatic,    VertexLabels -> Placed["Name", Tooltip],    PlotLabel -> "Colonist Network"]

We now use a social network method called BetweennessCentrality that measures the centrality of an agent in a network. It is the fraction of shortest paths between pairs of other agents that pass through that agent. Because such an agent can broker information between other agents, this measure is key in determining the importance of a particular node in the network: a node that lies on many of the shortest paths between pairs of actors, with nothing else lying between them, scores highly.
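Before applying it to the colonists, here is a toy illustration unrelated to the Revere data: two triangles joined at a single vertex, where the joining vertex is the only broker between the two sides and so gets the highest betweenness score.

toy = Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1, 3 <-> 4, 4 <-> 5, 5 <-> 3}, VertexLabels -> Automatic]; BetweennessCentrality[toy] (* vertex 3 scores highest *)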

We’ll first create a function that will allow us to visualize not only BetweennessCentrality, but also EigenvectorCentrality and ClosenessCentrality.

HighlightCentrality[g_, cc_] :=   HighlightGraph[g,    Table[Style[VertexList[g][[i]],      ColorData["TemperatureMap"][cc[[i]]/Max[cc]]], {i,      VertexCount[g]}]]

We begin with some brief code for BetweennessCentrality that uses the defined ColorData feature to show us which actors have the highest ability to transmit resources or information through the network, along with the Tooltip that was previously defined.

HighlightCentrality[personPersonGraph,   BetweennessCentrality[personPersonGraph]]

Lo and behold, Paul Revere appears to have a vastly higher betweenness score than anyone else in the network. Significantly, John Adams is at the center of our radial graph, but he does not appear to have much power in the network. Let’s grab the numbers.
TopFive[measure_, heading_] :=   TableForm[   TakeLargestBy[Transpose[{colonistsNames, measure}], #[[2]] &, 5],    TableHeadings -> {None, {"Colonist Rebel", heading}}]

betweenness = BetweennessCentrality[personPersonGraph]; TopFive[betweenness, "Betweenness Centrality"]

Revere has almost double the score of the next-highest colonist, Thomas Urann. What this indicates is Revere’s essential importance in the network as a broker of information. Since he is a member of five of the seven groups, this isn’t terribly surprising, but it would have gone unnoticed without this type of inquiry.

ClosenessCentrality differs from betweenness in that it is concerned with path lengths to other actors. Agents who can reach a high number of other actors through short path lengths are able to disseminate information, or even exert power, more efficiently than agents on the periphery of the network. Let’s run our function on the network again and look at ClosenessCentrality to see if Revere still ranks highest.

HighlightCentrality[personPersonGraph,   ClosenessCentrality[personPersonGraph]]

Revere appears ranked the highest, but it is not nearly as dramatic as his betweenness score and, again, John Adams has a low score. Let’s grab the measurements for further analysis.

closeness = ClosenessCentrality[personPersonGraph]; TopFive[closeness, "Closeness"]

As our heat-map coloring of nodes indicates, other colonists are not far behind Revere, though he certainly is the highest ranked. While there are other important people in the network, Revere is clearly the most efficient broker of resources, power or information.

One final measure we can examine is EigenvectorCentrality, which uses a more advanced algorithm and takes into account the centrality of all nodes and an individual actor’s nearness and embeddedness among highly central agents.

HighlightCentrality[personPersonGraph,   EigenvectorCentrality[personPersonGraph]]

There appear to be two top contenders for the highest eigenvector score. Let’s once again calculate the measurements in a table for examination.

eigenvectorCentrality = EigenvectorCentrality[personPersonGraph]; TopFive[eigenvectorCentrality, "Eigenvector Centrality"]

Nathaniel Barber and Revere have nearly identical scores; however, Revere still tops the list. Let’s now take the top five closeness scores and create a network without them in it to see how the cohesiveness of the network might change.

sHoleData =    Select[PaulRevereData, !       MemberQ[{"Paul Revere", "Thomas Chase", "Henry Bass",         "Nathaniel Barber", "Thomas Urann"}, #Name] &];

shcolonistsNames =    StringJoin[Riffle[Reverse[StringSplit[#, "."]], " "]] & /@     Normal@sHoleData[All, "Name"];

shbipartiteAdjacencyMatrix = Boole@Normal[sHoleData[Values, Rest]]; shedges =    ReplacePart[    shbipartiteAdjacencyMatrix.Transpose[      shbipartiteAdjacencyMatrix], {i_, i_} -> 0]; shpersonPersonGraph =   AdjacencyGraph[shcolonistsNames, shedges,    EdgeStyle -> {Opacity[.1]}, GraphLayout -> "RadialDrawing",     VertexLabels -> Placed["Name", Tooltip],     PlotLabel -> "Without Key Colonists"];

GraphicsRow[{shpersonPersonGraph, personPersonGraph},   ImageSize -> Large]

We see quite a dramatic change in the graph on the left with our key players removed, indicating those with the top five closeness scores are fairly essential in joining these seven political organizations together. Joseph Warren appears to be one of only a few people who can act as a bridge between disparate clusters of connections. Essentially, it would be difficult to have information spread freely through the network on the left as opposed to the network on the right that includes Paul Revere.

Conclusion

As we have seen, we can use network science in history to uncover or expose misguided preconceptions about a figure’s importance in historical events, based on group membership metadata. Prior to Fischer’s analysis, many thought Revere was just a courier, and not a major figure. However, what I have been able to show is Revere’s importance in bridging disparate political groups. This further reveals that the Revolutionary movement was pluralistic in its aims. The network was ultimately tied together by disdain for the tyranny of King George III, unjust British military actions and policies that led to bloody revolt, not necessarily a top-down directive from political elites.

Beyond history, network science and natural language processing have many applications, such as uncovering otherwise hidden brokers of information, resources and power, i.e. social capital. One can easily imagine how this might be useful for computational marketing or public relations.

How will you use network science to uncover otherwise-hidden insights to revolutionize and disrupt your work or interests?

Special thanks to Wolfram|Alpha data scientist Aaron Enright for helping with this blog post and to Charlie Brummitt for providing the beginnings of this analysis.


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

]]>
0
Andrew Steinacher <![CDATA[Create a Tracker to Analyze Gas Mileage Using Wolfram Tech]]> http://blog.internal.wolfram.com/?p=36907 2017-06-22T16:05:39Z 2017-06-22T15:40:26Z Completed reportPlot 3D animation

When I first started driving in high school, I had to pay for my own gas. Since I was also saving for college, I had to be careful about my spending, so I started manually tracking how much I was paying for gas in a spreadsheet and calculating how much gas I was using. Whenever I filled my tank, I kept the receipts and wrote down how many miles I’d traveled and how many gallons I’d used. Every few weeks, I would manually enter all of this information into the spreadsheet and plot out the costs and the amount of fuel I had used. This process helped me both visualize how much money I was spending on fuel and manage my budget.

Once I got to college, however, I got a more fuel-efficient car and my schedule got a lot busier, so I didn’t have the time to track my fuel consumption like this anymore. Now I work at Wolfram Research and I’m still really busy, but the cool thing is that I can use our company technology to more easily accomplish my automotive assessments.

After completing this easy project using the Wolfram Cloud’s web form and automated reporting capabilities, I don’t have to spend much time at all to keep track of my fuel usage and other information.

Tracking MPG with Web Forms

To start this project, I needed a way to store the data. I’ve found that the Wolfram Data Drop is a convenient way to store and access data for many of my projects.

I created a databin to store the data with just one line of Wolfram Language code:

bin = CreateDatabin["Name" -> "MPG tracking"]

Basic Web Form

Next, I needed to design a web form that I could use to log the data to the Databin. I used FormFunction to set up a basic one to record gallons of fuel used (from filling the tank each time) and trip distance (from reading the car’s onboard computer).

I also added another field for the date and time of the trip, so that I could add data retroactively (e.g. entering data from old receipts).

I used the DateString function to create an approximate time stamp for submitting data:

basicForm = FormFunction[ { {"TripDistance", "Trip distance (mi)"} -> Restricted["Quantity", "Miles"], {"FuelUsed", "Fuel used (gal)"} -> Restricted["Quantity", "Gallons"], {"Timestamp", "Date & Time"} -> <| "Interpreter" -> "ComputedDateTime", "Input" :> DateString[{"Month", "/", "Day", "/", "Year", " ", "Hour", ":", "Minute"}] |> }, (DatabinAdd[bin, #]; "Data submitted successfully!") & ]

This form works in the notebook interface, but it isn’t accessible from anywhere but my Mathematica notebook. If you want to access it on the web or from a phone, you need to deploy it to the cloud.

Conveniently, you can do this with just one more line of code using CloudDeploy:

coBasic = CloudDeploy[basicForm, "CarHacks/MPGTracker/BasicForm", Permissions -> "Public"]

Wolfram Cloud form

Extended Form: More Data, Better Appearance

If that’s all you wanted to record, you could stop there. After just a few lines of code, the form created will log distance traveled and fuel used, but there’s quite a bit more data that is available while at a gas station.

A typical car’s dashboard shows average speed and odometer readings from the onboard computer. Additionally, most newer cars will report an estimation of the average gas mileage on a per-trip basis, so I designed the following form that makes it easy to test the accuracy of those readings.

I also added a field to record the location by logging the city where I am filling up with the help of Interpreter. I used $GeoLocationCity and CityData to pre-populate this field so I don’t have to type it out each time.

Finally, if you’re saving for college like I was, you’ll want to record the total price too.

All of these data points can be helpful for tracking fuel consumption, efficiency and more.

extendedFormSpec = {    {"EstimatedMPG", "Estimated gas mileage (mi/gal)"} ->      Restricted["Quantity", ("Miles")/("Gallons")],    {"AverageSpeed", "Average speed (mph)"} -> <|      "Interpreter" -> Restricted["Quantity", ("Miles")/("Hours")],      "Required" -> False,      "Help" -> "(optional)"      |>,    {"Odometer", "Odometer (mi)"} -> Restricted["Quantity", "Miles"],    {"TripDistance", "Trip distance (mi)"} ->      Restricted["Quantity", "Miles"],    {"TotalPrice", "Total price ($)"} -> "CurrencyAmount",    {"FuelUsed", "Fuel used (gal)"} ->      Restricted["Quantity", "Gallons"],    "City" -> <|      "Interpreter" -> "City",      "Input" :> CityData[$GeoLocationCity, "FullName"],      "Default" :> $GeoLocationCity,      "Required" -> False,      "Help" -> "(optional)"      |>,    {"Timestamp", "Date & Time"} -> <|      "Interpreter" -> "ComputedDateTime",      "Input" :>        DateString[{"Month", "/", "Day", "/", "Year", " ", "Hour", ":",          "Minute"}]      |>    };

The last thing to consider before deploying the webpage is the appearance. I set up some visual improvements with the help of AppearanceRules, PageTheme, and FormFunction’s "HTMLThemed" result style:

extendedForm = FormFunction[    extendedFormSpec,    (DatabinAdd[bin, #]; "Data submitted successfully!") &,    "HTMLThemed",    AppearanceRules -> <|      "Title" -> "My Car's Gas Mileage",      "ItemLayout" -> "Inline"      |>,    PageTheme -> "Red"    ]; coExtended =   CloudDeploy[extendedForm, "CarHacks/MPGTracker/ExtendedForm",    Permissions -> "Public"]

Advanced Wolfram Cloud form

Making It Accessible

Now that I have a working form, I need to be able to access it when I’m at a gas station.

I almost always have my smartphone on me, so I can use URLShorten to make a simpler web address that I can type quickly:

URLShorten[coExtended]

Or I can avoid typing out a URL altogether by making a QR code with BarcodeImage, which I can read with my phone’s camera application:

BarcodeImage[coExtended[[1]], "QR"]

Once I accessed the form on my phone, I added it as a button on my home screen, which makes returning to the form when I’m at a gas station very easy:

Add form to home screen

Visualizing and Analyzing MPG Data

If you’re following along, at this point you can just start logging data by using the form; I personally have been logging this data for my car for over a year now. But what can I do with all of this data?

With the help of more than 5,000 built-in functions, including a wealth of visualization functions, the possibilities are almost limitless.

I started by querying for the data in my car’s databin with Dataset:

binData = Dataset[Databin["me8Q5puO"]];

binData[[-5 ;;]]

With a few lines of code and the built-in entity framework, I can see all of the counties where I’ve traveled over the last year or so using GeoHistogram:

Show[  GeoGraphics[{EdgeForm[Black],     Polygon /@ {Entity[       "AdministrativeDivision", {"Illinois", "UnitedStates"}],       Entity["AdministrativeDivision", {"Indiana", "UnitedStates"}],       Entity["AdministrativeDivision", {"Wisconsin",         "UnitedStates"}]}}],  GeoHistogram[binData[[All, "City"]], "AdministrativeDivision2"]  ]

I can also see the gas mileage over the course of the past year with TimeSeries:

timeSeries = TimeSeries[Databin["me8Q5puO"]]

DateListPlot[timeSeries["TripDistance"]/timeSeries["FuelUsed"],   PlotTheme -> "Detailed", FrameLabel -> {None, "mi/gal"},   PlotLabel -> "Gas Mileage", PlotRange -> Full, Mesh -> Full]

Analyzing Gas Mileage Factors

I often wonder what I can do to improve my gas mileage. I know that there are many factors at play here: driving habits, highway/city driving, the weather—just to name a few. With the Wolfram Language, I can see the effects of some of these on my car’s gas mileage.

I can start by looking at my average speed to compare the effects of highway and city driving and compute the correlation:

mpgVsSpeed =    Select[binData[[All, {"AverageSpeed", "EstimatedMPG"}]],     FreeQ[_Missing]]; ListPlot[mpgVsSpeed, PlotTheme -> "Detailed",   FrameLabel -> {Quantity[None, "Miles"/"Hours"],     Quantity[None, "Miles"/"Gallons"]},   PlotLabel -> "MPG vs Average speed"]

Correlation @@ Transpose[mpgVsSpeed]

It’s pretty clear from the plot that at higher average speeds, gas mileage is higher, but it does appear to eventually level off and somewhat decrease. This makes sense because although a higher average speed indicates less city driving (less stop-and-go traffic), it does require burning more fuel to maintain a higher speed. For example, on the interstate, the engine might be running above its optimal RPM, there will be more wind resistance, etc.
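One rough way to check that leveling-off is to fit a simple quadratic model to the data—a sketch that assumes the mpgVsSpeed dataset from above and strips the units first:

pairs = QuantityMagnitude[Values /@ Normal[mpgVsSpeed]]; quad = LinearModelFit[pairs, {x, x^2}, x]; Show[ListPlot[pairs], Plot[quad[x], {x, Min[pairs[[All, 1]]], Max[pairs[[All, 1]]]}]]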

With the help of WeatherData, I can also see if there is a correlation with gas mileage and temperature. I can compute the mean temperature for each trip by taking the mean temperatures of each day between the times that I filled up:

binDataWithTemperature =    Dataset@BlockMap[(Append[Last[#],         "Temperature" ->          Mean[WeatherData[Last[#]["City"], "MeanTemperature",            Append[#[[All, "Timestamp"]], "Day"],            "NonMetricValue"]]]) &, Normal[binData], 2, 1];

mpgVsTemp =    binDataWithTemperature[[All, {"Temperature", "EstimatedMPG"}]]; ListPlot[mpgVsTemp, PlotTheme -> "Detailed",   FrameLabel -> {Quantity[None, "DegreesFahrenheit"],     Quantity[None, "Miles"/"Gallons"]},   PlotLabel -> "MPG vs Mean Temperature"]

The correlation is weaker, but there is a relationship:

Correlation @@ Transpose[mpgVsTemp]

I can also visualize both correlations for the average speed and temperature in 3D space by using miles per gallon as the “height”:

ListPointPlot3D[  Values@Select[    binDataWithTemperature[[     All, {"Temperature", "AverageSpeed", "EstimatedMPG"}]],     FreeQ[_Missing]],  PlotStyle -> PointSize[Large], AxesLabel -> {"Temp", "Speed", "MPG"},   Filling -> Axis, PlotTheme -> "Detailed",   PlotRange -> {All, All, {15, 40}}]

Plot 3D animation 2

It’s also clear from this plot that gas mileage is positively correlated with both temperature and average speed.

Automated Reporting

Now that I have code to visualize and analyze the data, I need some way to automate this process when I’m away from my computer. For example, I can set up a template notebook that can generate reports in the cloud.

To do this, you can use CreateNotebook["Template"] or File > New > Template Notebook
(File > New > Template in the cloud).

After following John Fultz’s steps in his presentation to mimic the TimeSeries plot above, I created a simple report template here:

Report template

I can test the report generation locally by using GenerateDocument (or with the Generate button in the template notebook):

SetDirectory[NotebookDirectory[]]; GenerateDocument["CarHacks1_BasicTemplate.nb", <|   "BinID" -> "me8Q5puO"|>, "CarHacks1_BasicReport.nb"]

Gas mileage report

From here, I can generate a report every time I submit the form by adding this code to the form’s action. But first I need to upload the template notebook to the cloud with CopyFile (alternatively, you can upload it via the web interface):

SetDirectory[NotebookDirectory[]]; CopyFile["CarHacks1_BasicTemplate.nb",   CloudObject["CarHacks/MPGTracker/BasicTemplate"]] SetOptions[%, Permissions -> "Public"];

Now I can update the form to generate the report, and then use HTTPRedirect to open the report as soon as it is finished:

automatedBasicReportForm = FormFunction[    extendedFormSpec,    With[      {       binID = "me8Q5puO",       templatePath = "CarHacks/MPGTracker/BasicTemplate",       reportOutputPath = "CarHacks/MPGTracker/BasicReport_Latest.nb"       },      DatabinAdd[Databin[binID], #];      GenerateDocument[templatePath, <|"BinID" -> binID|>,        reportOutputPath];      HTTPRedirect[CloudObject[reportOutputPath]]      ] &,    "HTMLThemed",    AppearanceRules -> <|      "Title" -> "My Car's Gas Mileage",      "ItemLayout" -> "Inline"      |>,    PageTheme -> "Red"    ]; CloudDeploy[automatedBasicReportForm, \ "CarHacks/MPGTracker/AutomatedBasicReportForm",   Permissions -> "Public"]

That is a basic report. Of course, it’s easy to add more to the template, which I’ve done here, incorporating some of the plots I created before, as well as a few more. Again, I can generate the advanced report to test the template:

SetDirectory[NotebookDirectory[]]; GenerateDocument["CarHacks1_AdvancedTemplate.nb", <|   "BinID" -> "me8Q5puO"|>, "CarHacks1_AdvancedReport.nb"]

Travel report

Seeing that it works, I can upload the template to the cloud:

SetDirectory[NotebookDirectory[]]; CopyFile["CarHacks1_AdvancedTemplate.nb",   CloudObject["CarHacks/MPGTracker/AdvancedTemplate"]] SetOptions[%, Permissions -> "Public"];

Lastly, I need to update the form to use the new template and then deploy it:

automatedAdvancedReportForm = FormFunction[    extendedFormSpec,    With[      {       binID = "me8Q5puO",       templatePath = "CarHacks/MPGTracker/AdvancedTemplate",       reportOutputPath = "CarHacks/MPGTracker/AdvancedReport_Latest.nb"       },      DatabinAdd[Databin[binID], #];      GenerateDocument[templatePath, <|"BinID" -> binID|>,        reportOutputPath];      HTTPRedirect[CloudObject[reportOutputPath]]      ] &,    "HTMLThemed",    AppearanceRules -> <|      "Title" -> "My Car's Gas Mileage",      "ItemLayout" -> "Inline"      |>,    PageTheme -> "Red"    ]; CloudDeploy[automatedAdvancedReportForm, \ "CarHacks/MPGTracker/AutomatedReportForm", Permissions -> "Public"]

With this setup, I can always access the latest report at the URL the form redirects me to, so I find it handy to also keep it on my phone’s home screen next to the button for the form:

Generate report

Conclusion

Now you can see how simple it is to use the Wolfram Language to collect and analyze data from your vehicle. I started with a web form and a databin to collect and store information. Then, for convenience, I worked on accessing these through my smartphone. In order to analyze the data, I created visualizations with relevant variables. Finally, I automated the process so that my data collection will generate updated reports as I add new data. Altogether, this is a vast improvement over the manual spreadsheet method that I used when I was in high school.

Now that you see how quick and easy it is to set this up, give it a try yourself! Factor in other variables or try different visualizations, and maybe you can find other correlations. There’s a lot you can do with just a little Wolfram Language code!


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

]]>
1
Michael Gammon <![CDATA[Wolfram Community Highlights: Shadow Mapping, Pairs Trading, the Puzzled Ant and More]]> http://blog.internal.wolfram.com/?p=36823 2017-06-16T16:57:03Z 2017-06-16T16:57:03Z Animation of houseGlobal terrorismFlight paths

Wolfram Community recently surpassed 15,000 members! And our Community members continue to impress us. Here are some recent highlights from the many outstanding Community posts.

BVH Accelerated 3D Shadow Mapping, Benjamin Goodman

House solar map
        Shade data converted to solar map

In a tour de force of computational narrative and a fusion of various Wolfram Language domains, Benjamin Goodman designs a shadow mapping algorithm, a process of applying shadows to a computer graphic. Goodman speeds up shadow mapping via space partitioning, storing a hierarchy of bounding volumes as a graph, a structure known as a bounding volume hierarchy (BVH).



Pairs Trading with Copulas, Jonathan Kinlay

Copula distribution xy

Copula distribution

Jonathan Kinlay, the head of quantitative trading at Systematic Strategies LLC in New York, shows how copula models can be applied in pairs trading and statistical arbitrage strategies. The approach dates to when copulas began to be widely adopted in financial engineering, risk management and credit derivatives modeling, but it remains relatively underexplored compared to more traditional techniques in this field.



The Global Terrorism Database (GTD), Marco Thiel

Global terrorism

Marco Thiel broke a Wolfram Community record in April when he contributed four featured posts in just three days! He utilized data from the Global Terrorism Database (GTD), an open-source database with information on terrorist events around the world going back to 1970. It includes systematic data on domestic as well as transnational and international terrorist events, amounting to more than 150,000 cases. Marco analyzes weapon types, the geographic distribution of attacks and casualties, and temporal and demographic patterns.



Flight Data and Trajectories of Aeroplanes, Marco Thiel

Flight data

Thiel takes advantage of the large amounts of data becoming ever more available; often, however, such datasets are valuable but difficult to access. He shows how to use air traffic data to generate visualizations of three-dimensional flight paths on the globe and to access flight positions and altitudes, call signs, types of planes, origins, destinations and much more.



Analysing “All” of the World’s News—Database of Everything, Marco Thiel

Network of actors

In another clever data collection/analysis project, Thiel works with “the largest, most comprehensive, and highest resolution open database of human society ever created,” according to the description provided by GDELT (Global Database of Events, Language, and Tone). Since 2015, this organization has acquired about three-quarters of a trillion emotional snapshots and more than 1.5 billion location references. Thiel performs some basic analysis and builds supporting visualizations.



How-to-Guide: External GPU on OSX—How to Use CUDA on Your Mac, Marco Thiel

GPU hardware

Thiel discusses the neural network and machine learning framework that has become one of the key features of the latest releases of the Wolfram Language. Training neural networks can be very time-consuming, and the Wolfram Language offers an incredibly easy way to use a GPU to train networks and also do numerous other interesting computations. This post explains how to use powerful external GPU units for Wolfram Language computing on your Mac.



Creative Routines Charts, Patrick Scheibe

Composers' work time

People are often interested in how creative or successful individuals manage their time, and when in their daily schedules they do what they are famous for. Patrick Scheibe describes how to build and personalize “creative routines” visualizations.



QR Code in Shopping Cart Handle, Patrick Scheibe

QR code texture projection

Scheibe also brought to Wolfram Community his famous article “QR Code in Shopping Cart Handle.” It explains the image processing algorithm for reading QR code labels when they are deformed by attachment to physical objects such as shopping carts and product packages.



Calculating NMR-Spectra with Wolfram Language, Hans Dolhaine

Hans Dolhaine, a chemist from Germany, writes a detailed walk-through calculating nuclear magnetic resonance spectra with the Wolfram Language. This is a useful educational tool for graduate physics and chemistry classes. Please feel free to share it in your interactions with students and educators.



Computational Introduction to Logarithms, Bill Gosper

Logarithm animation

Another excellent resource for educators is this elementary introduction to logarithms by means of computational exploration with the Wolfram Language. The Community contributor is renowned mathematician and programmer Bill Gosper. His article is highly instructive and accessible to a younger generation, and it contains beautiful animated illustrations that serve as outstanding educational material.



Using Recursion and FindInstance to Solve Sudoku and The Puzzled Ant and Particle Filter, Ali Hashmi

The Puzzled Ant problem

Finally, Ali Hashmi uses the recursion technique coupled with heuristics to solve a sudoku puzzle and also explains the connection between the puzzled ant problem and particle filters in computer vision.



If you haven’t yet signed up to be a member of Wolfram Community, don’t hesitate! You can join in on these discussions, post your own work in groups of your interest, and browse the complete list of Staff Picks.

]]>
0
Keiko Hirayama <![CDATA[Brain, Neurons, Cognition: Computational Neuroscience]]> http://blog.internal.wolfram.com/?p=36699 2017-06-06T15:43:59Z 2017-06-06T15:43:59Z Brain image and brain flow graph

As the next phase of Wolfram Research’s endeavor to make biology computable, we are happy to announce the recent release of neuroscience-related content.

The most central part of the human nervous system is the brain. It contains roughly 100 billion neurons that act together to process information, subdivided functionally and structurally into areas specialized for certain tasks. The brain’s anatomy, the characteristics of neurons and cognitive maps are used to represent some key aspects of the functional organization and processing abilities of our nervous system. Our new neuroscience content will give you a sneak peek into the amazing world of neuroscience with some facts about brains, neurons and cognition.

Brain Anatomy and Network

A primal part of the brain, the amygdala is a well-studied cognitive area responsible for emotional processing circuitry, with active roles in emotional state, memory, face recognition and decision making. The amygdala is located near the brainstem, close to the center of the brain and, as its name suggests, is shaped like an almond:

light = Join[{{"Ambient", Black}}, Table[{"Directional", Hue[.58, .5, 1], ImageScaled[{Sin[x], Cos[x], -.5}], Pi/2}, {x, 0, 2 Pi - 2 Pi/8, 2 Pi/8}]]; AnatomyPlot3D[{AnatomyForm[Directive[Specularity[White, 30], Hue[.58, 0, 1, .12], Lighting -> light], <|Entity["AnatomicalStructure", "Amygdala"] -> Glow[Red]|>], Entity["AnatomicalStructure", "Amygdala"], Entity["AnatomicalStructure", "Brain"], Entity["AnatomicalStructure", "Skin"]}, Background -> Hue[.58, 1, .3], ViewPoint -> Right, SphericalRegion -> True, PlotRange -> {{Automatic, 0}, Automatic, {1400, Automatic}}, BaseStyle -> {RenderingOptions -> {"DepthPeelingLayers" -> 20}}, ImageSize -> 500]

Outgoing connections from the amygdala can be found with the "NeuronalOutput" property:

output = Entity["AnatomicalStructure", "Amygdala"]["NeuronalOutput"]

Here we see a visualization of the output connectivity of the amygdala in two layers:

ag = NestGraph[ Cases[EntityValue[#, EntityProperty["AnatomicalStructure", "NeuronalOutput"]], _Entity] &, Entity["AnatomicalStructure", "Amygdala"], 2, ImageSize -> 700, VertexSize -> 2, EdgeStyle -> LightGray, GraphStyle -> "SmallNetwork", GraphLayout -> "RadialEmbedding", VertexLabels -> Placed["Name", Tooltip], GraphHighlight -> Entity["AnatomicalStructure", "Amygdala"]]

Just as with any graph, we can carry out further computations on these networks. Like many other biological systems, our nervous system is hardwired for positive and negative feedback. Feedback is one of the key aspects of the brain’s information processing: it allows the efficacy of transmission to be augmented or decreased, and it fine-tunes the resulting outputs.

Find the loops in the above graph and highlight them:

cycle = FindCycle[ag, Infinity, All];

HighlightGraph[ag, cycle]

Or find the specific circuit formed by the amygdala and the prefrontal cortex. The prefrontal cortex plays the primary role in decision making, so the amygdala-prefrontal cortex connectivity is essential in modulating responses to emotional experiences:

circuit =   FindCycle[{ag, Entity["AnatomicalStructure", "PrefrontalCortex"]}]

HighlightGraph[ag, Join[circuit, VertexList[ Graph[First@circuit]], {Entity["AnatomicalStructure", "AnteriorCingulateGyrus"]}], GraphHighlightStyle -> "Thick"]

We can also identify the minimum-cost flow between the amygdala and the spinal cord. The spinal cord processes signals from the brain and transmits them to other parts of the body to excite motor response:

f = FindMinimumCostFlow[ag, Entity["AnatomicalStructure", "Amygdala"],    Entity["AnatomicalStructure", "SpinalCord"], "OptimumFlowData"]

f["FlowGraph"]

It is also noteworthy that, in addition to the brain’s connectivity in the central nervous system, peripheral innervation is integrated into our AnatomyData function. The motor commands from the spinal cord eventually reach the periphery.

Find nerves that innervate the left hand:

nerve = EntityValue[Entity["AnatomicalStructure", "LeftHand"],    "NerveSupply"]

And visualize them in 3D with the AnatomyPlot3D function:

AnatomyPlot3D[{nerve, Opacity[.3], Entity["AnatomicalStructure", "SkeletonOfLeftFreeUpperLimb"], Entity["AnatomicalStructure", "LeftHand"]}] /. Annotation[x_, y_] :> Tooltip[x, "Name" /. y]

Neuron Characteristics

We have looked at macroscopic pictures of our nervous system so far. Now let’s look at the brain’s functional unit, the neuron. Of course, we cannot characterize all of the billions of individual neurons, but the key features of a few hundred neuron types are very similar across mammalian species, and these are what we describe in detail.

A variety of properties are available for the "Neuron" entity type to describe physical, electrophysiological and spatial characteristics of individual types of neurons:

Entity["Neuron"]["SampleEntities"]

Entity["Neuron"]["Properties"]

We can get information on the types of neurons found in a particular brain region. For example, we can get a listing of neurons in the hippocampus, which is associated with emotional states, conversion of short-term to long-term memories and forming spatial memory:

EntityValue[Entity["AnatomicalStructure", "Hippocampus"], "Neurons"]

For further detail, we can list the set of neurons whose axons arborize in the CA1 alveus area of the hippocampus:

EntityValue[ EntityClass[ "Neuron", {"LocationOfDistantAxonArborization" -> Entity["AnatomicalStructure", "CA1Alveus"]}], "Entities"]

Neurons transmit electrical signals to communicate with one another. Physical characteristics and patterns of their spikes, known as action potentials, differ across different neuron types.

We can obtain experimentally measured electrophysiological properties of hippocampus CA1 pyramidal cells:

Dataset@EntityValue[Entity["Neuron", "HippocampusCA1PyramidalCell"],     "NonMissingPropertyAssociation"][[;; 10]]

Here we can visually recognize how spike characteristics vary across different neuron types:

Histogram[ EntityValue[EntityList["Neuron"], EntityProperty["Neuron", "AverageSpikeAmplitude"], "NonMissingEntityAssociation"], 20, PlotLabel -> "Spike amplitude (mV)"]

A single neuron’s spike propagation can be simulated with the well-known Hodgkin and Huxley model (A. L. Hodgkin and A. F. Huxley, 1952), based on four differential equations involving voltages and currents. There are also biologically realistic computational models, built on Hodgkin and Huxley’s concepts, that simulate ensembles of spikes across populations of neurons (E. M. Izhikevich, 2004). By modeling neurons’ electrical spikes and comparing their patterns of activity with experimentally measured ones, we can better understand how neurons excite or suppress one another to transmit information:

Spiking neurons
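
As a flavor of what such a simulation looks like, here is a minimal sketch of the Izhikevich single-neuron model (this is not the code behind the animation above; the parameters a, b, c, d are the standard "regular spiking" values from Izhikevich, and the constant input current i0 is an assumption made here for illustration):

(* Izhikevich (2004) single-neuron model: reset v and u whenever the spike threshold is crossed *)
a = 0.02; b = 0.2; c = -65; d = 8; i0 = 10; (* regular-spiking parameters; i0 is an assumed constant input current *)
sol = NDSolve[{v'[t] == 0.04 v[t]^2 + 5 v[t] + 140 - u[t] + i0, u'[t] == a (b v[t] - u[t]), v[0] == -70, u[0] == b (-70), WhenEvent[v[t] >= 30, {v[t] -> c, u[t] -> u[t] + d}]}, {v, u}, {t, 0, 200}];
Plot[Evaluate[v[t] /. sol], {t, 0, 200}, AxesLabel -> {"t (ms)", "v (mV)"}]

The plot shows a regular train of action potentials; changing a, b, c and d reproduces the bursting, chattering and other firing patterns described in the paper.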

Cognitive Map

After looking at microscopic features of the brain, let us finally explore its macro-scale executive functions. Thanks to recent advances in imaging techniques for visualizing brain activity in various cognitive states, we can map out the cortical areas associated with specific cognitive processes. Brain areas associated with functions such as memory, decision making, language, emotional state and visual perception are well characterized by activity-based fMRI analysis.

Using the EntityValue query with the AnatomicalFunctionalConcept entity type, we can find more information on hierarchically categorized brain activities:

Dataset@EntityValue[   Entity["AnatomicalFunctionalConcept",     "DecisionMaking"], {EntityProperty["AnatomicalFunctionalConcept",      "Definition"],     EntityProperty["AnatomicalFunctionalConcept",      "AssociatedAnatomicalSites"],     EntityProperty["AnatomicalFunctionalConcept", "BroaderConcepts"],     EntityProperty["AnatomicalFunctionalConcept", "NarrowerConcepts"],     EntityProperty["AnatomicalFunctionalConcept", "SupersetConcepts"],     EntityProperty["AnatomicalFunctionalConcept", "SubsetConcepts"],     EntityProperty["AnatomicalFunctionalConcept",      "StandardExperimentalTasks"]}, "PropertyAssociation"]

Here we can look up the categories of functions associated with each cerebral lobe and create a simple cortical map:

Cerebral lobe cortical map
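
One way to assemble such a map is sketched below (this is not the original code, and the four lobe entity names are assumptions about the canonical AnatomyData names): group every functional concept by the "AssociatedAnatomicalSites" property queried above.

(* sketch: bucket functional concepts by cerebral lobe via "AssociatedAnatomicalSites"; the four lobe entity names are assumed *)
lobes = Entity["AnatomicalStructure", #] & /@ {"FrontalLobe", "ParietalLobe", "TemporalLobe", "OccipitalLobe"};
concepts = EntityList["AnatomicalFunctionalConcept"];
sites = EntityValue[concepts, EntityProperty["AnatomicalFunctionalConcept", "AssociatedAnatomicalSites"]];
Dataset[AssociationMap[Function[lobe, Pick[concepts, MemberQ[#, lobe] & /@ sites]], lobes]]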

We are not limited to the abstract representation of cortical maps; fMRI-based statistical maps of brain activity are also available.

Let’s look at how we perceive the visual world. A key aspect of visual perception is that our brain categorizes visually perceived faces, places, words, numbers and so on with distinctive patterns of activity, each a subprocess (concept) of cognition. The following graph illustrates how these concepts are hierarchically organized. Some areas of brain activation are highlighted (the brain images are seen from the rear):

g = NestGraph[Cases[EntityValue[#, "NarrowerConcepts"], _Entity] &, Entity["AnatomicalFunctionalConcept", "VisualPerception"], 2]; act = EntityValue[VertexList[g], "BrainGraphicBack"]; Graph[EdgeList[g], VertexLabels -> MapThread[#1 -> If[MemberQ[{Entity["AnatomicalFunctionalConcept", "VisualPerception"], Entity["AnatomicalFunctionalConcept", "VisualFaceRecognition"], Entity["AnatomicalFunctionalConcept", "VisualNumberRecognition"], Entity["AnatomicalFunctionalConcept", "VisualPlaceRecognition"]}, #1], Placed[Grid[{{Rasterize[#1]}, {#2}}], Center], Placed[Rasterize[#1, ImageSize -> {Automatic, 26}], Center]] &, {VertexList[g], act}], VertexSize -> Small, GraphLayout -> "RadialEmbedding", PlotTheme -> "Monochrome", AspectRatio -> 1.2, ImageSize -> 900, ImagePadding -> 60] // Image

Brain graphic

OK, let’s look further. Visually perceived words, sentences, faces, etc., in turn, affect “language” and “emotion”:

ImageCollage@  EntityValue[   Entity["AnatomicalFunctionalConcept",     "Language"], {EntityProperty["AnatomicalFunctionalConcept",      "BrainGraphicFront"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageCoronalSlices"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageHorizontalSlices"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageSagittalSlices"]}]

ImageCollage@  EntityValue[   Entity["AnatomicalFunctionalConcept",     "Emotion"], {EntityProperty["AnatomicalFunctionalConcept",      "BrainGraphicFront"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageCoronalSlices"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageHorizontalSlices"],     EntityProperty["AnatomicalFunctionalConcept",      "BrainImageSagittalSlices"]}]


We can confirm that the amygdala (remember, the left and right amygdalae are found near the center of the brain) is actively involved in emotion. If you want to explore these individual models further, they are also available as 3D polygon data, ready to be aligned with our 3D brain model in AnatomyData for further computation.

Here is the brain activation area 3D graphic associated with emotion:

em = Entity["AnatomicalFunctionalConcept", "Emotion"][   "ActivationAreas3DGraphic"]

We can combine that graphic with the brain model for visual comparison (the amygdala is highlighted in red; only the right cerebral hemisphere is shown here for demonstration):

AnatomyPlot3D[{em /. Opacity[_] :> Opacity[.4], {Directive[Specularity[White, 30], Hue[.58, 0, 1, .12], Lighting -> light], Entity["AnatomicalStructure", "RightCerebralHemisphere"]}, {Glow[ Red], Entity["AnatomicalStructure", "RightAmygdala"]}}, PlotRange -> Entity["AnatomicalStructure", "RightCerebralHemisphere"] , Background -> Hue[.58, 1, .3], ViewPoint -> Right, SphericalRegion -> True, BaseStyle -> {RenderingOptions -> {"DepthPeelingLayers" -> 20}}]

It’s fascinating to learn how our brain is organized and how it coordinates the processes in our nervous system. As we know, there is still a lot to be learned about human cognition, and exciting discoveries are being made every day. As we gain additional insights, we continue to expand our knowledgebase to attain a better and deeper understanding of the human nervous system.

Stay tuned for more neuroscience content to come!


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.
