The Wolfram Technology Conference 2018 Livecoding Championship: A Recap

For the third year in a row, the annual Wolfram Technology Conference played host to a new kind of esport—the Livecoding Championship. Expert programmers competed to solve challenges with the Wolfram Language, with the goal of winning the championship tournament belt and exclusive bragging rights.

Wolfie with tournament belt

This year I had the honor of composing the competition questions, in addition to serving as live commentator alongside trusty co-commentator (and Wolfram’s lead communications strategist) Swede White. You can view the entire recorded livestream of the event here—popcorn not included.

Commentators

Left: Stephen Wolfram and the author. Right: Swede White and the author commentating.

This year’s competition started with a brief introduction by competition room emcee (and Wolfram’s director of outreach and communications) Danielle Rommel and Wolfram Research founder and CEO Stephen Wolfram. Stephen discussed his own history with (non-competitive) livecoding and opined on the Wolfram Language’s unique advantages for livecoding. He concluded with some advice for the contestants: “Read the question!”

Question 1: A New Kind of Science Bibliography Titles


Use the Wolfram Data Repository to obtain the titles of the books in Stephen Wolfram’s library that were used during the creation of A New Kind of Science. Return the longest title from this list as a string.


After a short delay, the contestants started work on the first question, which happened to relate to Stephen Wolfram’s 2002 book A New Kind of Science. Stephen dropped by the commentators’ table to offer his input on the question and its interesting answer, an obscure tome from 1692 with a 334-character title (Miscellaneous Discourses Concerning the Dissolution and Changes of the World Wherein the Primitive Chaos and Creation, the General Deluge, Fountains, Formed Stones, Sea-Shells found in the Earth, Subterraneous Trees, Mountains, Earthquakes, Vulcanoes, the Universal Conflagration and Future State, are Largely Discussed and Examined):

Miscellaneous Discourses
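
If you'd like to try this one yourself, a solution in the spirit of the intended one might look something like the sketch below. Note that the resource name and the "Title" column are assumptions on my part; search the Wolfram Data Repository for the actual item:

(* Sketch: pull the bibliography from the Wolfram Data Repository and
   return the longest title; the resource name here is hypothetical *)
titles = Normal@ResourceData["Books in Stephen Wolfram's Library"][All, "Title"];
First@MaximalBy[titles, StringLength]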

Question 2: Countries Closest to a Disk


Find the country whose Polygon is closest to a disk. More specifically, find the country that minimizes the isoperimetric ratio P²/(4πA), where P is the perimeter and A is the area of the country’s polygon. Work in the equirectangular projection. Return a Country entity. Do not use GeoVariant.


The second question turned out to be quite challenging for our contestants—knowledge of a certain function, DiscretizeGraphics, was essential to solving the question, and many contestants had to spend precious time tracking down this function in the Wolfram Language documentation.

Countries
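
Taking the measure to be the isoperimetric ratio P²/(4πA) (a standard roundness measure that equals exactly 1 for a disk), here's a rough sketch of a solution. It treats raw latitude/longitude coordinates directly as the equirectangular projection and glosses over error handling for countries whose polygons fail to discretize:

(* Sketch: discretize each country's polygon, then compute P^2/(4 Pi A);
   the "roundest" country minimizes this ratio *)
roundness[country_] := Module[{region},
  region = DiscretizeGraphics[country["Polygon"] /. GeoPosition[pts_] :> pts];
  Perimeter[region]^2/(4 Pi Area[region])];
First@MinimalBy[CountryData[], roundness]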

Question 3: Astronaut Timelines


Plot the dates of birth of the six astronauts/cosmonauts who were crewmembers on the most manned space missions. Return a TimelinePlot (a Graphics object) called with an association of Person entity -> DateObject rules, with all options at their defaults.


The contestants stumbled a bit in interpreting the third question, but figured it out relatively quickly. However, technical issues led to some exciting drama as the judges deliberated over whom to award the third-place point. Stephen made a surprise return to the commentators’ table to talk about astronaut Michael Foale’s unique connection to Wolfram Research and Mathematica. I highly recommend reading Michael’s fascinating keynote address, Navigating on Mir, given at the 10th-anniversary Mathematica Conference.

Astronauts/cosmonauts

Question 4: Centroid in Capital


Only one US state (excluding the District of Columbia) has a geographic centroid located within the polygon of its capital city. Find that state and return it as an AdministrativeDivision entity.


Wolfram Algorithms R&D department researcher José Martín-García joined the commentators’ table for the fourth question. José worked on the geographic computation functionality in the Wolfram Language, and helped explain to our audience some of the technical aspects of this question, such as the mathematical concept of a geometric centroid. Solving this question involved the same DiscretizeGraphics function that tripped up contestants on question 2, but it seems that this time they were prepared, and produced their solutions much more quickly.

Geographic centroid
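
In outline, a solution might look like the following sketch. The "AllUSStatesPlusDC" entity class and the "CapitalCity" property are my best guesses at the relevant Knowledgebase names, so treat them as assumptions:

(* Sketch: test whether each state's geometric centroid falls inside
   the discretized polygon of its capital city *)
centroidInCapital[state_] := Module[{stateRegion, capitalRegion},
   stateRegion = DiscretizeGraphics[state["Polygon"] /. GeoPosition[p_] :> p];
   capitalRegion =
    DiscretizeGraphics[state["CapitalCity"]["Polygon"] /. GeoPosition[p_] :> p];
   RegionMember[capitalRegion, RegionCentroid[stateRegion]]];
states = EntityList[EntityClass["AdministrativeDivision", "AllUSStatesPlusDC"]];
First@Select[
  DeleteCases[states,
   Entity["AdministrativeDivision", {"DistrictOfColumbia", "UnitedStates"}]],
  centroidInCapital]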

Question 5: Periodic Pie


Retrieve the material color of each Element in the periodic table, as given by the Color interpreter. Discard any Missing or Colorless values, and return a PieChart Graphics object with a sector for each color, where each sector’s width is proportional to the number of elements with that color, and the sector is styled with the color. Sort the colors into canonical order by count before generating the pie chart, so the sector sizes are in order.


The fifth question was the wordiest in this year’s competition. For every question, the goal is to provide as much clarity as possible about the expected format of the answer (as well as its content), and this question demonstrates that well. The last sentence is particularly important: it specifies that the pie “slices” must be in ascending order by size, which ensures that every correct submission’s chart looks the same as the expected result. This aspect took our contestants a few tries to pin down, but they eventually got it.

PieChart
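
Here’s a condensed sketch of the expected approach (the "Color" property name follows the question’s wording; treat it as an assumption):

(* Sketch: interpret each element's color string, discard Missing and
   non-color results (e.g. "Colorless"), then chart the counts *)
strings = DeleteMissing[EntityValue["Element", "Color"]];
colors = Cases[Interpreter["Color"] /@ strings, _RGBColor];
tally = Sort@Counts[colors]; (* Sort on an association orders by value *)
PieChart[Values[tally], ChartStyle -> Keys[tally]]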

Question 6: A.I.-braham Lincoln


The neural net model NetModel["Wolfram English Character-Level Language Model V1"] predicts the next character in a string of English text. Nest this model 30 times (generating 30 new characters) on the first sentence in the Gettysburg Address, as given by ExampleData and TextSentences. Return a string.


The sixth question makes use of not only the Wolfram Language’s unique symbolic support for neural networks, but also the recently launched Wolfram Neural Net Repository. You can read more about the repository in its introductory blog post.

This particular neural network, the Wolfram English Character-Level Language Model V1, is trained to generate English text by predicting the most probable next character in a string. The results here aren’t what you’d expect to hear from President Lincoln’s mouth, but they do reflect the fact that part of this model’s training data consists of old news articles!

Lincoln output
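
For reference, here’s a sketch of a solution, assuming (as the output above suggests) that applying the model to a string returns the single most likely next character:

(* Sketch: repeatedly append the model's predicted next character *)
lm = NetModel["Wolfram English Character-Level Language Model V1"];
sentence = First@TextSentences[ExampleData[{"Text", "GettysburgAddress"}]];
Nest[StringJoin[#, lm[#]] &, sentence, 30]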

Question 7—Actually, Question 10: Eclipses per Year


Of the many total solar eclipses to occur between now and December 31st, 2100, two will happen during the same year. Find that year and return it as an integer.


For the seventh and last question of the night, our judges decided to skip ahead to the tenth question on their list! We hadn’t expected to get to this question in the competition and so hadn’t lined up an expert commentator. But as it turns out, José Martín-García knows a fair bit about eclipses, and he kindly joined the commentators’ table on short notice to briefly explain eclipse cycles and the difference between partial and total solar eclipses. Check out Stephen’s blog post about the August 2017 solar eclipse for an in-depth explanation with excellent visualizations.

Eclipse visualization

(The highlighted regions here show the “partial phase” of each eclipse, which is the region in which the eclipse is visible as a partial eclipse. The Wolfram Language does not yet have information on the total phases of these eclipses.)
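
One route to the answer uses SolarEclipse to enumerate the total eclipses in the range, with some simple Counts bookkeeping to find the doubled year:

(* Sketch: list all total solar eclipses through 2100, then find the
   year that appears twice *)
eclipses = SolarEclipse[{Now, DateObject[{2100, 12, 31}], All},
   EclipseType -> "Total"];
years = DateValue[#, "Year"] & /@ eclipses;
First@Keys@Select[Counts[years], GreaterEqualThan[2]]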

The Results

At the end of the competition, the third-place contestant, going under the alias “AieAieAie,” was unmasked as Etienne Bernard, lead architect in Wolfram’s Advanced Research Group (which is the group responsible for the machine learning functionality of the Wolfram Language, among other wonderful features).

Etienne Bernard and Carlo Barbieri

The contestant going by the name “Total Annihilation” (Carlo Barbieri, consultant in the Advanced Research Group) and the 2015 Wolfram Innovator Award recipient Philip Maymin tied for second place, and both won limited-edition Tech CEO mini-figures!

Left: Philip Maymin. Right: Tech CEO mini-figure.

The first-place title of champion and the championship belt (as well as a Tech CEO mini-figure) went to the contestant going as “RiemannXi,” Chip Hurst!

Chip Hurst

An Astronomical Inconsistency

I wanted to specifically address potential confusion regarding question 3, Astronaut Timelines. This is the text of the question:


Plot the dates of birth of the six astronauts/cosmonauts who were crewmembers on the most manned space missions. Return a TimelinePlot (a Graphics object) called with an association of Person entity -> DateObject rules, with all options at their defaults.


Highly skilled programmer Philip Maymin was one of our contestants this year, and he was dissatisfied with the outcome of this round. Here’s a solution to the question that produces the expected “correct” result:

(* Tally how many missions each person crewed, then plot the top six by birth date *)
counts = Counts@Flatten[EntityValue["MannedSpaceMission", "Crew"]];
TimelinePlot[
 EntityValue[Keys@TakeLargest[counts, 6], 
  EntityProperty["Person", "BirthDate"], "EntityAssociation"]]

And here’s Philip’s solution:

(* Rank each person by the length of their "MannedSpaceMissions" property instead *)
TimelinePlot@
 EntityValue[
  Keys[Reverse[
     SortBy[EntityValue[
       Flatten@Keys@
         Flatten[EntityClass["MannedSpaceMission", 
            "MannedSpaceMission"][
           EntityProperty["MannedSpaceMission", "PrimaryCrew"]]], 
       "MannedSpaceMissions", "EntityAssociation"], Length]][[;; 6]]],
  "BirthDate", "EntityAssociation"]

Note the slightly different approaches. The first solution gets the "Crew" property (a list) of every "MannedSpaceMission" entity, flattens the resulting list and counts the occurrences of each "Person" entity within it. Philip’s solution instead takes each "Person" entity from that list and checks the length of their "MannedSpaceMissions" property. These are both perfectly valid techniques (although Philip’s didn’t even occur to me as a possibility when I wrote the question), and in theory they should produce exactly the same result, since both access the same conceptual information through slightly different means. But they don’t, and it turns out Philip’s result is actually the correct one! Why is this?

The primary reason for this discrepancy boils down to a bug in the Wolfram Knowledgebase representation of the STS-27 mission of Space Shuttle Atlantis. Let’s look at the "Crew" property for STS-27:

Entity["MannedSpaceMission", "STS27"]["Crew"]

Well, that’s clearly wrong! There’s a "Person" entity for Jerry Lynn Ross in there, but it doesn’t match the entity that canonically represents him within the Knowledgebase. I’ve reported this inaccuracy, along with a few other issues, to our data curation team, and I expect it will be addressed soon. Thanks to Philip for bringing this to our attention!

Conclusion

The inaugural Wolfram Language Livecoding Competition took place at the 2016 Wolfram Technology Conference, and the following year’s competition in 2017 was the first to be livestreamed. We held something of a test-run for this year’s competition at the Wolfram Summer School in June, for which I also composed questions and piloted an experimental second, concurrent livestream for behind-the-scenes commentary. At this year’s Technology Conference we merged these two streams into one, physically separating the contestants from the commentators to avoid “contaminating” the contestant pool with our commentary. We also debuted a new semiautomated grading system, which eased the judges’ workload considerably. Each time we’ve made some mistakes, but we’re slowly improving, and I think we’ve finally hit upon a format that’s both technically feasible and entertaining for a live audience. We’re all looking forward to the next competition!

Comments


2 comments

  1. great post, Jesse. fascinating detail and narrative
    Congratulations, Chip and the others!
    Wish I was there this year to see it. See you next year!

  2. Competitions questions look like a lot of fun! Nice job, Jesse!
