Duking It Out in the Wolfram Language: A Breakdown of the 2019 Livecoding Championship

Two weeks ago, I had the pleasure of returning as a commentator for the fourth annual Livecoding Championship, a special event held during the 2019 Wolfram Technology Conference. We had such an incredible turnout this year, with 27 total participants and 14 earning at least one point! Conference attendees and Wolfram staff competed for the title of Livecoding Champion, with seven questions (plus one tiebreaker!) challenging their speed, agility and knowledge of the Wolfram Language. It was a high-spirited battle for first place, and while I had prepared “answer key” solutions in advance, I always look forward to the creativity and cleverness that competitors demonstrate in their wide range of approaches to each question.

By popular request, in addition to revisiting the questions, I’ll walk you through how competitors reached their solutions and earned their points, as a kind of “study guide” for next year’s aspiring champions. So hold on to your keyboards—we’re going in!

The 2019 Wolfram Livecoding Championship

Follow along with these questions by watching this recorded video of the 2019 Livecoding Championship livestream!

Question 1: Symbol Eponyms

Find the 10 people with the most Wolfram Language symbols named after them (eponyms), and return the most common country of birth among them as a "Country" entity.

An eponym is a “namesake”—the person (or other entity) after which a thing is named. The Wolfram Language contains many eponymic functions and other symbols, particularly in mathematical domains (Fibonacci, EulerGamma, DelaunayMesh, etc.), and this data is accessible from the Wolfram Knowledgebase through the "EponymousPeople" property of "WolframLanguageSymbol" entities. For example:

Entity["WolframLanguageSymbol", "Fibonacci"]["EponymousPeople"]

This returns a list of "Person" entities. The birthplace of a "Person" entity can be queried with the "BirthPlace" property:

%[[1]]["BirthPlace"]

In this case, the birthplace is a "City" entity, the country of which can be found using the "Country" property:

%["Country"]
&#10005

%["Country"]

Since I wrote the questions, I had to solve them too! Here’s my solution to the question, using this approach:

people = Flatten@
   EntityValue["WolframLanguageSymbol", "EponymousPeople"];
commonestPeople = Commonest[people, 10];
birthplaces = EntityValue[commonestPeople, "BirthPlace"];
First@Commonest@EntityValue[birthplaces, "Country"]

The first line collects the eponyms for all "WolframLanguageSymbol" entities, and the second selects the 10 most common people in that list. That’s the first part of the question, “Find the 10 people with the most Wolfram Language symbols named after them.”

Next, we query the cities of birth of each of the 10 people, get the country each city is in and extract the single most common country in this final list—Germany.

As a bonus, we can see who these 10 most eponymous people are, along with the number of symbols each has to their name:

ReverseSort@KeyTake[Counts[people], commonestPeople]

And the countries of birth of each:

EntityValue[#, "Country"] & /@ 
  EntityValue[commonestPeople, "BirthPlace", 
   "EntityAssociation"] // Sort

Now let’s see what our contestants came up with!

First Place: Gerli (Solved in 4m 41s)

symbols = WolframLanguageData[];
people = EntityValue[symbols, "EponymousPeople"];
places = EntityValue[
   SortBy[Tally[Flatten[people]], Last][[-10 ;;]][[All, 1]], 
   "BirthPlace"];
First[Last[SortBy[Tally[EntityValue[places, "Country"]], Last]]]

The contestant going by the screen name “Gerli” would prove to be a formidable force throughout the evening. Their solution is structurally quite similar to mine, although some components are slightly different. Gerli used (in effect) EntityValue[WolframLanguageData[], "EponymousPeople"] in place of my EntityValue["WolframLanguageSymbol", "EponymousPeople"]—these are essentially equivalent, so choosing one is just a matter of style. They also chose to combine Tally, SortBy[Last] and Part where I used Commonest.

Second Place: <3 Wolfie (Solved in 5m 40s)

Commonest[#[EntityProperty["Person", "BirthPlace"]]["Country"] & /@ 
   Keys[ReverseSort[
      Counts[Flatten[
        EntityValue["WolframLanguageSymbol", "EponymousPeople"]]]][[;;
       10]]]][[1]]

Contestant “<3 Wolfie” produced a one-line solution using Counts, Part, ReverseSort and Keys to replicate Commonest for the first part of the question, but used Commonest directly for the second part. I should note that the syntax this contestant used to get each person’s country of birth…

(* using the commonestPeople variable from my solution *)
#[
    EntityProperty["Person", "BirthPlace"]][
   "Country"] & /@ commonestPeople

… is (semantically) equivalent to:

EntityValue[
 EntityValue[commonestPeople, 
  EntityProperty["Person", "BirthPlace"]], "Country"]

Although the former approach might look somewhat cleaner, it’s generally a good idea to use the listable syntax of EntityValue whenever possible. This is because there’s often an overhead to each EntityValue call, which can make mapping significantly slower—this becomes noticeable when querying properties of a large list of entities:

(* using the people variable from my solution, which is a list of \
~600 "Person" entities *)
RepeatedTiming[#["BirthPlace"][
     "Country"] & /@ people;]

RepeatedTiming[
 EntityValue[EntityValue[people, "BirthPlace"], "Country"];]

However, as this question only required querying across a short list of 10 people, the performance impact of mapping is negligible.

Third Place: degenerateArtist (Solved in 6m 04s)

ReverseSortBy[
   Tally[CityData[#["BirthPlace"], "Country"] & /@ 
     Take[ReverseSortBy[
        Tally[Flatten[
          EntityValue["WolframLanguageSymbol", "EponymousPeople"]]], 
        Last], 10][[All, 1]]], Last] // First // First

“degenerateArtist” continues the pattern of bespoke reimplementations of Commonest by combining Tally, ReverseSortBy[Last], Take and Part—this is quite similar to Gerli’s approach.

“Country” Property vs. GeoIdentify

There’s one subtlety about all four of these solutions that I’d like to point out: they all assume that the "BirthPlace" property of a "Person" entity will be a "City" entity, or at least some entity type that has a "Country" property. Now, while this assumption happens to be correct for the 10 most eponymous people in question, it’s not a guarantee across all "Person" entities—the "BirthPlace" property can be any entity that represents a geographic location, and not all such entities have the "Country" property. For example, the "BirthPlace" of Agner Krarup Erlang (namesake of ErlangB, ErlangC and ErlangDistribution, not to mention the programming language) is a "Country" entity, which does not itself have a "Country" property:

Entity["Person", "AgnerKrarupErlang::3kxhf"]["BirthPlace"]

%["Country"]
&#10005

%["Country"]

Instead of querying a fixed property of the "BirthPlace", we can use GeoIdentify. GeoIdentify returns the geographic entity or entities of a specified type that a specified geographic object is contained within. We can see that London is contained within the United Kingdom, and that the country Denmark is considered to be contained within itself:

GeoIdentify["Country", 
 Entity["City", {"London", "GreaterLondon", "UnitedKingdom"}]]

GeoIdentify["Country", Entity["Country", "Denmark"]]

You’ll notice that a list is returned, not just a single entity—this is because certain types of objects given as the second argument, like geographic polygons, may intersect multiple countries. In our case, however, we need only look at the first element of the returned list. Here’s my solution modified to use GeoIdentify instead of querying the "Country" property of the birthplace:

people = Flatten@
   EntityValue["WolframLanguageSymbol", "EponymousPeople"];
commonest = Commonest[people, 10];
birthplaces = EntityValue[commonest, "BirthPlace"];
First@Commonest@(GeoIdentify["Country", birthplaces][[All, 1]])

This approach isn’t completely bulletproof, as the "BirthPlace" property can occasionally be a "HistoricalCountry" entity, which as of this writing does not work with GeoIdentify (although this may soon change)—but it does work well for the large majority of cases.

Question 2: Rule 30 Edge Components

Evolve the rule 30 cellular automaton for 2,000 steps, starting from a single 1 centered on a background of 0s. Use the evolution as input to MorphologicalComponents, with CornerNeighbors set to False, and find the number of distinct components that border the right edge of the automaton. Return an integer.

This question refers to the elementary cellular automaton (CA) rule 30, discovered by Stephen Wolfram. The automaton looks like this, starting from a single “1” cell centered on an infinite background of “0” cells, and evolved for 100 steps:

RulePlot[CellularAutomaton[30], {{1}, 0}, 100]

(RulePlot shows “0” cells as white and “1” cells as black. However, in the world of images, 0 means black and 1 means white. This is the world in which MorphologicalComponents and the rest of the Wolfram Language’s image processing functionality lives, so from here on in I’ll be using that terminology.)

The question involves treating the evolution of the automaton as a grid of pixels constituting an image and applying segmentation analysis, an image processing technique, to pick out blobs of connected white pixels (I’ll explain what “connected” means shortly). This was my solution:

iters = 2000;
ca = CellularAutomaton[30, {{1}, 0}, iters];
components = MorphologicalComponents[ca, CornerNeighbors -> False];
Length@Union@Diagonal[components, iters]

This evaluates 2,000 steps of the rule using the CellularAutomaton function, which (in this case) returns a two-dimensional matrix of 1s and 0s. The MorphologicalComponents function is often used with images, but it will also accept a raw array of pixels. MorphologicalComponents returns a 2D matrix of integers—all pixels belonging to the same component will be replaced with the same integer. We then use the Diagonal function to get the elements on the right diagonal—the second argument k to Diagonal picks out the kth superdiagonal, which starts in column k + 1 of the first row, and since the evolution of an elementary CA starting with a single black cell for n steps has width 2n + 1, passing n as the second argument starts the diagonal at the center column, n + 1. Finally, we use Union to get a list of the unique elements in the diagonal and count these with Length.
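
To see what that second argument does on a small matrix (purely an illustration, not part of the solution), compare the leading diagonal, {1, 5, 9}, with the diagonal that starts one column to the right, {2, 6}:

m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
Diagonal[m]
Diagonal[m, 1]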

To get a more intuitive idea of what’s going on, we can use the Colorize function to create an image where a different color is assigned to each component:

(* looking at just the first 500 steps *)
ImageCrop[
 Colorize[components], 500*2, {Center, Bottom}]

The question is essentially asking us to count the number of unique colors that occur on the right edge of the triangle.

I said previously that I’d explain what “connected” pixels are. Let’s take a closer look at the top of that image:

(* looking at just the first 75 steps *)

ImageCrop[%, 75*2, {Center, Bottom}] // 
 ImageResize[#, Scaled[2], Resampling -> "Nearest"] &

If you look closely at the borders between adjacent components, you might notice that cells of different colors only meet at their corners. That’s the CornerNeighbors -> False option at work. By default, MorphologicalComponents will treat any white pixel in the eight-cell vicinity (above, below, left, right or diagonal—Moore neighborhood in CA parlance) of a given white pixel as “connected” to that center pixel, while setting CornerNeighbors -> False restricts that search to the four-cell vicinity (above, below, left or right—von Neumann neighborhood). See how with the default CornerNeighbors -> True all five cells end up in the same component, while with CornerNeighbors -> False they’re each in separate components:

i = Image[{{1, 0, 1}, {0, 1, 0}, {1, 0, 1}}];
Colorize /@ {
  MorphologicalComponents[i],
  MorphologicalComponents[i, CornerNeighbors -> False]
  }

By definition, every white cell in the evolution of an elementary CA must have at least one white cell in its eight-cell neighborhood, so leaving the default setting on MorphologicalComponents would yield the entire structure of the automaton as one giant component—not a very interesting result. By restricting the component search to the four-cell neighborhood, we give the internal dynamics of the automaton a chance to show through into the results.
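
We can check that claim directly on a shorter evolution (a quick sanity check rather than part of the solution): with the default CornerNeighbors setting, the largest component label should come out as 1, meaning every white cell was lumped into a single component:

(* maximum component index with the default eight-cell connectivity *)
Max[MorphologicalComponents[CellularAutomaton[30, {{1}, 0}, 200]]]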

First Place: Jon McLoone (Solved in 4m 28s)

data = MorphologicalComponents[
   Image[CellularAutomaton[30, {{1}, 0}, 2000]], 
   CornerNeighbors -> False];
c = Last[DeleteCases[#, 0]] & /@ data;
Length[Union[c]]

Jon’s solution is almost identical to mine—the only substantial difference is a different approach to getting that rightmost diagonal. Jon takes advantage of the fact that the pixels of the right diagonal will always be white, and that MorphologicalComponents will always identify pixels of the black background as component 0. This lets Jon simply delete all 0s in the component matrix, after which the last element of each row will be the right diagonal. We can visualize this by replacing 0s with empty strings:

(* just the first 10 steps *)

data = MorphologicalComponents[
   Image[CellularAutomaton[30, {{1}, 0}, 10]], 
   CornerNeighbors -> False];
data /. 0 -> "" // MatrixForm

Second Place: Gerli (Solved in 5m 12s)

automaton = CellularAutomaton[30, {{1}, 0}, 2000];
components = 
  MorphologicalComponents[automaton, CornerNeighbors -> False];
Length[Union[Diagonal[components[[All, 2001 ;;]]]]]

Gerli’s solution is even closer to mine—they used the Diagonal function, but instead of supplying a second argument to specify the column to start the diagonal at, they “cropped” the matrix to begin with the center column. We can visualize the resulting “cropped” matrix:

(* just the first 10 steps *)

CellularAutomaton[30, {{1}, 0}, 10][[All, 11 ;;]] // ArrayPlot

See how what was previously the right side of the evolution’s triangle is now the leading diagonal of the matrix, which is what will be picked out by Diagonal (if Diagonal is given only one argument, it defaults to the leading diagonal.)

Third Place: Sander (Solved in 5m 50s)

out = MorphologicalComponents[
   Image[CellularAutomaton[30, {{1}, 0}, 2000]], 
   CornerNeighbors -> False];
DeleteDuplicates[Table[out[[i, 2000 + i]], {i, 2001}]] // Length

Sander reproduces the functionality of Diagonal using Table to select for each row n the element in that row n – 1 positions to the right of the center. You’ll also note that they use DeleteDuplicates instead of Union—these functions are nearly identical, except that DeleteDuplicates doesn’t sort the result. According to the documentation page for the former, this can make DeleteDuplicates over 100x faster than Union on very large lists! I’ll definitely remember that in the future when choosing between the two.
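
If you’d like to verify that claim yourself, a rough comparison along these lines will do (exact timings will of course depend on your machine and data):

(* a large list with many repeated elements *)
list = RandomInteger[1000, 10^6];
RepeatedTiming[Union[list];]
RepeatedTiming[DeleteDuplicates[list];]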

Question 3: Nearest Flag

Use the Nearest function to find the flag of another country that is the most similar to the flag of Entity["Country", "Italy"]. Return the flag’s country as a "Country" entity.

Nearest collects distance and similarity measurement functionality from various domains into a single powerful “superfunction.” Given a list of values l and a single value v, Nearest will return the value or values in l to which v is nearest. The cool part is that the elements of l and v can be any of several Wolfram Language data types, such as strings, colors, images or even geographic coordinates. In the case of images, Nearest by default uses the ImageDistance function to determine the (dis)similarity between two images.
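
As a quick illustration of that flexibility (unrelated to the flag data itself), here is Nearest on plain numbers and on strings; the first call returns {8}, and the second returns {"cat"}, since strings are by default compared by edit distance:

Nearest[{1, 3, 8, 12}, 7]
Nearest[{"dog", "cat", "cow"}, "cart"]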

This was my solution:

flags = DeleteMissing@
   EntityValue["Country", EntityProperty["Country", "FlagImage"], 
    "EntityAssociation"];
country = Entity["Country", "Italy"];
First@Nearest[KeyDrop[flags, country], flags[country]]

We start by getting the flags of every country in the world as an association (the Wolfram Language’s version of an associative array) where keys are "Country" entities and values are the flag of each country. We remove any key-value pairs with missing values (in this case, Antarctica is the only "Country" entity with no associated flag.) Then we call Nearest with an association of entity-to-flag pairs (with Italy removed—the closest flag to Italy is of course that of Italy, and the question says “flag of another country”) as the first argument and the image of Italy’s flag as the second. Nearest will compare the values (images) of the association to the second argument and return a list of the corresponding keys ("Country" entities) for those nearest to it. Finally, we extract the first element of that list (there would only be more than one element if there were multiple countries whose flags were all equally nearest to Italy, which is not the case here). We can take a look at Italy’s flag along with the three flags nearest to it:

Multicolumn[
 KeyValueMap[Labeled[#2, #1] &]@
  KeyTake[flags, Nearest[flags, flags[country], 10]][[;; 4]], 
 Appearance -> "Horizontal"]

First Place: Gerli (Solved in 2m 12s)

flags = EntityValue[
   DeleteCases[CountryData[], Entity["Country", "Italy"]], 
   "FlagImage"];
flag = Nearest[flags, Entity["Country", "Italy"]["FlagImage"]] // 
   First;
DeleteCases[CountryData[], Entity["Country", "Italy"]][[
 Position[flags, flag][[1, 1]]]]

Gerli comes in first, again with a solution very similar to mine. The main difference is that they’re calling Nearest with a list of images, looking up the position of the resulting nearest flag in the master list of flags and finally using that position to locate the country it corresponds to.

Second Place: mohammadb (Solved in 3m 24s)

Nearest[EntityValue[EntityClass["Country", "Countries"], "FlagImage", 
   "EntityAssociation"], 
  Entity["Country", "Italy"][EntityProperty["Country", "Flag"]], 
  2][[2]]

“mohammadb” produces a very tidy one-liner, cleverly tackling the challenge of excluding Italy from the results by asking for the two nearest flags to that of Italy (using the third argument to Nearest), which returns a list containing Italy and Mexico, and simply extracting the second element of this list.

Third Place: Sander (Solved in 4m 14s)

db = Thread[(Rasterize /@ EntityValue[CountryData[], "Flag"]) -> 
    CountryData[]];
Nearest[db, CloudGet["https://wolfr.am/HXqLOdBG"], 2][[2]]

Sander chose to call Nearest with a list of rules instead of an association (both are equally valid), and they used the same technique as mohammadb to exclude Italy from the result. However, they were unfortunately caught by a bit of a pitfall regarding "Country" entity properties.

If you run Sander’s code, you’ll probably notice that it’s significantly slower than the other solutions to this question, taking about 30 seconds to evaluate on my computer. The reason for this boils down to Sander’s use of the "Flag" property instead of "FlagImage". The latter, as used in the other solutions, returns a Wolfram Language Image object, which can be passed directly into Nearest. "Flag", however, returns a Graphics object, which is the Wolfram Language’s symbolic representation of vector graphics. Vector graphics can be greatly advantageous over raster images for some applications—they can be scaled up without losing fidelity, and for simple graphics they’re often smaller in byte count than a raster of comparable resolution. However, Graphics objects have to be rasterized with the Rasterize function to be used with functions (such as Nearest) that expect raster images, and this can be quite slow with the complex flags of certain countries:

AbsoluteTiming[
 Entity["Country", "FalklandIslands"]["Flag"] // Rasterize]

And in a time-trial situation such as this livecoding competition, a few dozen extra seconds spent evaluating can mean the difference between second and third place.

(You might notice that mohammadb’s solution also uses the "Flag" property—but only for getting the flag of Italy, which is simple and rasterizes quickly. The flags for the rest of the countries are obtained with "FlagImage".)

Question 4: Random Walk Zero-Crossings

Generate 3,000 steps of a RandomWalkProcess[.5], and count the number of positive-trending zero-crossings in the walk. For example, {–x, 0, +x} is considered a positive-trending zero-crossing, while {+x, 0, –x} is considered negative-trending.

Reset the pseudorandom number generator with SeedRandom[1989] prior to generating the random walk. Return an integer.

A RandomWalkProcess is a type of random process supported in the Wolfram Language: RandomWalkProcess[.5] represents a one-dimensional random walk starting at 0 where at each step there’s an equal chance of incrementing or decrementing by 1. Here’s my solution:

SeedRandom[1989];
data = RandomFunction[RandomWalkProcess[.5], {0, 3000}]["Values"];
SequenceCount[data, {_?Negative, 0, _?Positive}]

We start by seeding the pseudorandom number generator with the integer 1989 as directed in the question, so as to ensure deterministic results. Next we evaluate the RandomWalkProcess for 3,000 steps using RandomFunction. This returns a TemporalData object, which represents a collection of time series—we can get just the values of the first time series from the "Values" property.

We then use the SequenceCount function to count the number of times a certain three-element pattern sequence occurs in the data (that pattern being {_?Negative,0,_?Positive}—a negative value, followed by zero, followed by a positive value).
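
To make the mechanics concrete, here’s the same pattern applied to a short made-up list; the two matching runs are {-1, 0, 1} and {-3, 0, 5}, so the result is 2:

SequenceCount[{1, -1, 0, 1, 2, -3, 0, 5}, {_?Negative, 0, _?Positive}]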

Let’s plot the random walk as a line:

ListLinePlot[data, AspectRatio -> 1/4, ImageSize -> Full]

The question is asking us to determine how many times that line crosses the x axis in an upward direction (that is, from the negatives to the positives).

First Place: Sander (Solved in 2m 26s)

SeedRandom[1989]
parts = Partition[
   RandomFunction[RandomWalkProcess[.5], {0, 3000}]["Values"], 3, 1];
Count[parts, {-1, 0, 1}]

Sander replicates the functionality of SequenceCount by using Partition with an offset of 1. This will return a list of all length-3 sublists in a list:

Partition[{1, 2, 3, 4, 5, 6, 7}, 3, 1]

Sander then simply counts the number of times {-1,0,1} occurs in this list of sublists—since the RandomWalkProcess always moves in steps of one, there’s really no need for the pattern-based approach I used.

Second Place: Jon McLoone (Solved in 2m 51s)

SeedRandom[1989]; data = 
 RandomFunction[RandomWalkProcess[0.5], {1, 3000}]; Length[
 SequenceCases[Last /@ data["Path"], {x_?Negative, 0, y_?Positive}]]

Jon’s solution is very similar to mine—the only difference is a slightly different (but equivalent) method of accessing the contents of the TemporalData object returned from RandomFunction, and the use of Length[SequenceCases[...]] instead of SequenceCount (again, equivalent).

Third Place: TotalAnnihilation (Solved in 3m 10s)

SequencePosition[Normal[SeedRandom[1989];
    RandomFunction[RandomWalkProcess[.5], {0, 2999}]][[1, 
   All, -1]], {-1, 0, 1}] // Length

“TotalAnnihilation” chose to get the values out of the TemporalData object by using the Normal function combined with a Part specifier. They also used Length[SequencePosition[...]] in the same way that Jon used Length[SequenceCases[...]].

Question 5: State Chain

The historic Lewis and Clark Expedition of 1804–1806 began near St. Louis, Missouri, and reached the Pacific coast just southwest of Astoria, Oregon.

Find the shortest chain of bordering US states that connects the states of Entity["AdministrativeDivision", {"Missouri", "UnitedStates"}] (Missouri) and Entity["AdministrativeDivision", {"Oregon", "UnitedStates"}] (Oregon), starting in the former and ending in the latter. For example, the shortest chain connecting Indiana and Tennessee is {Entity["AdministrativeDivision", {"Indiana", "UnitedStates"}], Entity["AdministrativeDivision", {"Kentucky", "UnitedStates"}], Entity["AdministrativeDivision", {"Tennessee", "UnitedStates"}]}. Return a list of "AdministrativeDivision" entities.

The mention of St. Louis and Astoria in the question text is just a bit of historical context; they aren’t actually relevant to the core question, which is purely topological. An obvious approach, then, is to represent the network of bordering states as a graph, which is exactly what I did in my solution:

from = Entity[
   "AdministrativeDivision", {"Missouri", "UnitedStates"}];
to = Entity["AdministrativeDivision", {"Oregon", "UnitedStates"}];
g = Graph[
   Flatten[KeyValueMap[Thread[UndirectedEdge[#1, #2]] &, 
     EntityValue[
      EntityClass["AdministrativeDivision", "USStatesAllStates"], 
      "BorderingStates", "EntityAssociation"]]]];
path = FindShortestPath[g, from, to]

The "BorderingStates" property of an "AdministrativeDivision" entity representing a US state contains a list of other states (also as "AdministrativeDivision" entities) that share borders with the first state. After getting the value of this property for each US state, it’s just a matter of creating graph edges from each state vertex to each of the states it borders, which can then be used to create a graph:

g

This can then simply be supplied to FindShortestPath to find the shortest chain between any two vertices. We can plot this chain of states on a map:

GeoGraphics[{PointSize[Medium], Green, GeoPath[path], Blue, 
  Point /@ Union@Flatten@path, Red, Point[from], Point[to]}, 
 GeoRange -> "Country"]

I should note that FindShortestPath will always return a single shortest path if one exists, even if there is more than one path of that shortest length. In this case there’s only one shortest path, so it’s not an issue, but for completeness one might prefer to use FindPath, which can be configured to find all paths of a given length.
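
For instance, here’s one way one might collect every path of that minimal length (a sketch reusing the g, from and to variables from my solution above): use FindShortestPath once to learn the minimal length, then hand that length to FindPath with All as the path count:

shortest = FindShortestPath[g, from, to];
(* all paths with exactly that many edges *)
FindPath[g, from, to, {Length[shortest] - 1}, All]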

First Place: TotalAnnihilation (Solved in 2m 56s)

FindShortestPath[
 Graph[Join @@ (Thread /@ 
     Normal[EntityValue[
       EntityClass["AdministrativeDivision", "ContinentalUSStates"], 
       EntityProperty["AdministrativeDivision", "BorderingStates"], 
       "EntityAssociation"]])], 
 Entity["AdministrativeDivision", {"Missouri", "UnitedStates"}], 
 Entity["AdministrativeDivision", {"Oregon", "UnitedStates"}]]

TotalAnnihilation squeezes into first place with a very economical solution that uses Normal on the association returned by EntityValue to convert it to a list of rules, with states on the left-hand sides and lists of bordering states on the right-hand sides. They then cleverly map Thread over that list to produce a nested list of entity-to-entity rules, which are then flattened with Apply[Join] (Join@@), like this:

l = {"State 1" -> {"State 2", "State 3"}, "State 4" -> {"State 5", "State 6"}}
&#10005

l = {"State 1" -> {"State 2", "State 3"}, 
  "State 4" -> {"State 5", "State 6"}}

Thread /@ l

Join @@ %

I’ve never seen Thread on its own mapped over a list like that—nice trick!

Second Place: Gerli (Solved in 2m 58s)

start = Entity["AdministrativeDivision", {"Missouri", "UnitedStates"}];
end = Entity["AdministrativeDivision", {"Oregon", "UnitedStates"}];
states = Entity["Country", "UnitedStates"][
   "AdministrativeDivisions"];
bordergraph = 
  Flatten[Function[state, state -> # & /@ state["BorderingStates"]] /@
     states];
FindShortestPath[Graph[bordergraph], start, end]

Gerli’s solution involves a pure function with a named argument, specified with Function. This is often necessary when one Map is used “inside” another, as with the & /@ syntax the Slot (#) arguments of the inner pure function shadow those of the outer one, leaving the outer value unreachable—the other solutions work around this by using Thread on the inside instead of Map.
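
Here’s a tiny standalone example of the issue (not taken from Gerli’s code): with Function, the outer argument keeps its own name and can be referenced inside the inner map, where # belongs to the inner pure function. Written entirely with nested & functions, both slots would refer to the innermost function instead.

(* gives {{x -> 1, x -> 2, x -> 3}, {y -> 1, y -> 2, y -> 3}} *)
Function[outer, outer -> # & /@ {1, 2, 3}] /@ {x, y}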

Third Place: Sander (Solved in 4m 19s)

states = EntityList[
   EntityClass["AdministrativeDivision", "AllUSStatesPlusDC"]];

bla = Transpose[{states, EntityValue[states, "BorderingStates"]}];
conn = Join @@ (Thread[#1 -> #2] & @@@ bla);
FindShortestPath[conn, 
 Entity["AdministrativeDivision", {"Missouri", "UnitedStates"}], 
 Entity["AdministrativeDivision", {"Oregon", "UnitedStates"}]]

We can see from the use of Transpose that Sander chooses to work with a list of two-element sublists instead of an association or a list of rules. This lets them use Apply at level 1 (@@@), where the first and second elements of each sublist become respectively the first and second arguments to Thread[#1 -> #2] &.

Question 6: JoinAcross without JoinAcross

Given the following definition of args, reimplement the basic functionality of JoinAcross, without using JoinAcross, as myJoinAcross, such that JoinAcross@@args===myJoinAcross@@args, and return myJoinAcross@@args (a list of associations).

args = {{<|a -> 1, b -> W|>, <|a -> 2, b -> X|>},
{<|a -> 1, c -> Y|>, <|a -> 2, c -> Z|>}, Key[a]};

This question is a little different from the others. Here, instead of producing a specific answer to a question, we’re being asked to define a relatively general function, and then prove that it works by comparing its output with that of the built-in function it’s supposed to replicate. (The part about returning JoinAcross@@args===myJoinAcross@@args is that proof—the three elements of the list args are to be given to JoinAcross/myJoinAcross as arguments, and @@ [Apply] is just a convenient way of using the elements of a list as arguments to a function.)
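
In case Apply is new to you, here it is in miniature: the elements of the list become the arguments of the function, so this returns f[1, 2, 3]:

f @@ {1, 2, 3}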

Let’s first get a sense of what JoinAcross (the built-in function) does:

(* this is the same as JoinAcross@@args *)
JoinAcross[
 {<|a -> 1, b -> W|>, <|a -> 2, b -> X|>},
 {<|a -> 1, c -> Y|>, <|a -> 2, c -> Z|>},
 Key[a]
 ]

As the documentation page for JoinAcross notes, it is very much the analogue of the SQL JOIN operation, merging the values (“rows”) in two lists of associations (“tables”) where they have the value of a certain key (“column”) in common. In this example, the first associations in each list have the key-value pair a -> 1 in common, so they get merged, and the second associations have a -> 2 in common, so they get merged. (As the question text notes, only the most basic functionality of JoinAcross is of concern here—the built-in function supports different types of joins, as well as specifying behavior for key collisions.) This is my solution:

myJoinAcross[alist_, blist_, joinkey_] := 
 Join @@ Append[
     Select[alist, Lookup[joinkey] /* EqualTo[joinkey[#]]], #] & /@ 
  blist

myJoinAcross[
 {<|a -> 1, b -> W|>, <|a -> 2, b -> X|>},
 {<|a -> 1, c -> Y|>, <|a -> 2, c -> Z|>},
 Key[a]
 ]

We’re mapping over the values in blist, so for each association in that list we append it to a list of all the associations in alist that share the value of key joinkey in the association from blist and then merge those associations into one. Let’s see what that looks like for just one of the values in blist (a single iteration of the map):

alist = {<|a -> 1, b -> W|>, <|a -> 2, b -> X|>};
blistVal = <|a -> 1, c -> Y|>;
joinkey = Key[a];

Select[alist, Lookup[joinkey] /* EqualTo[joinkey[blistVal]]]

Append[%, blistVal]

Join @@ %

The second argument of Select might look a little unusual—it’s using RightComposition to compose Lookup and EqualTo into a sort of ad hoc function. Also, the argument to EqualTo uses the operator form of Key (remember, joinkey is Key[a]). Both of these are just stylistic choices; they’re functionally equivalent to this:

Select[alist, #[[joinkey]] == blistVal[[joinkey]] &]

I’ll note that mine is actually quite a fragile solution, as it doesn’t react to changes in the order of association elements in the same way JoinAcross does:

(* flipped around the elements in the first association - note how the result is no longer the same as that from JoinAcross *)
newArgs = {
   {<|b -> W, a -> 1|>, <|a -> 2, b -> X|>},
   {<|a -> 1, c -> Y|>, <|a -> 2, c -> Z|>},
   Key[a]
   };
myJoinAcross @@ newArgs
JoinAcross @@ newArgs

First Place: timotheev (Solved in 5m 04s)

myJoinAcross[
   {<|a -> 1, b -> W|>, <|a -> 2, b -> X|>},
   {<|a -> 1, c -> Y|>, <|a -> 2, c -> Z|>},
   Key[a]
   ] := {<|a -> 1, b -> W, c -> Y|>, <|a -> 2, b -> X, c -> Z|>};

Hold on a minute, that doesn’t look right! Yes, while the wily “timotheev” has indeed defined a function that returns the expected result, they’ve done so by hard-coding that result on the right-hand side of the function!

Normally our judges would discard this as an invalid solution. However, a careful reading of the question text shows that the criterion of success for the instruction “reimplement the basic functionality of JoinAcross, without using JoinAcross, as myJoinAcross” is explicitly qualified as the statement JoinAcross@@args===myJoinAcross@@args, and timotheev’s solution does indeed satisfy that criterion, so the judges chose to mark their solution as correct. Congratulations to timotheev on adept exploitation of a subtle loophole!

Second Place: TotalAnnihilation (Solved in 5m 09s)

myJoinAcross[list1_, list2_, key_] := 
 Join @@@ Select[Tuples[{list1, list2}], key@#[[1]] === key@#[[2]] &] 

TotalAnnihilation comes in second just five seconds after timotheev (but with a “real” solution this time!). Their solution is somewhat structurally similar to mine, but they cut out a step by using Tuples to iteratively form all possible pairs of one element from list1 and one element from list2:

{list1, list2} = args[[;; 2]];
Column[tuples = Tuples[{list1, list2}]]

… and then filtering down the list to only those pairs where the values of the join key match:

key = Key[a];
Select[tuples, key@#[[1]] === key@#[[2]] &]

… after which they merge these lists by applying Join at level 1:

Join @@@ %

Third Place: Jose (Solved in 6m 05s)

myJoinAcross[list1_, list2_, key_] := 
 Join @@@ GatherBy[Join[list1, list2], key]

Jose’s solution is impressively succinct. They concatenate list1 and list2 right at the start, and then use GatherBy to collect associations with the same value for the join key into sublists:

Join[list1, list2]

GatherBy[%, key]

… and they merge the elements of the sublists, just as TotalAnnihilation did:

Join @@@ %

Excellent work, Jose!

Question 7: Adding Parentheses

Given the string "4-2*3+4", find all the parenthetical groupings of exactly two subexpressions at a time for which the final expression evaluates to a positive number. Return a list of strings that has been passed through Sort.

For example, the string "5+6*7-8" would have an expected output of {"(((5+6)*7)-8)","((5+(6*7))-8)","(5+((6*7)-8))"}.

Our final regular question of the evening was definitely the most challenging for our contestants—it’s not too tricky to understand, but solving it and returning a string result requires some non-obvious techniques. Here’s my solution:

str = "4-2*3+4";
&#10005

str = "4-2*3+4";
operators = StringCases[str, Except[DigitCharacter]];
nums = StringSplit[str, operators];
groupings = 
  Flatten /@ Groupings[nums, ({"(", #1, Delimiter, #2, ")"} &) -> 2];
groupingstrs = 
  StringJoin[
     ReplacePart[#, 
      Rule @@@ Transpose[{Position[#, Delimiter], operators}]]] & /@ 
   groupings;
Sort@Select[groupingstrs, ToExpression[#] > 0 &]

Let’s go through it line by line:

str = "4-2*3+4";
&#10005

str = "4-2*3+4";
operators = StringCases[str, Except[DigitCharacter]]

We start by assigning the string from the question text to a variable, and then we pick out the characters representing arithmetic operators—I chose to do this simply by asking for all characters in the string that aren’t digits:

nums = StringSplit[str, operators]

Then we perform the inverse operation to get just the digits in the string—I do this by splitting the string at each occurrence of an operator:

groupings = 
 Flatten /@ Groupings[nums, ({"(", #1, Delimiter, #2, ")"} &) -> 2]

Groupings is the “linchpin” function that all solutions to this question will likely have in common—it performs the core task of producing all possible groupings of elements in a list. We can see how it works with a simpler example:

Groupings[{1, 2, 3, 4}, 2] // Column

This is asking for all possible length-2 groupings of the elements in the list {1,2,3,4}. Notice that every sublist in the result contains exactly two elements—for example, {1,{2,3},4} isn’t allowed because it has three elements at level 1, but {{1,{2,3}},4} is allowed because level 1 contains two elements, {1,{2,3}} and 4.

The syntax I’m using, with a rule as the second argument to Groupings, lets us apply a function to each grouping before putting it back in place. We can see this better using a dummy function:

Groupings[{1, 2, 3, 4}, f -> 2] // Column

Notice that instead of putting each grouping in a list, the elements of the grouping are used as arguments to the function f that we provided. (Put another way, Groupings[{1,2,3,4},2] is equivalent to Groupings[{1,2,3,4},List -> 2].)

The function I provide, ({"(",#1,Delimiter,#2,")"}&), returns for each grouping of two elements a list containing, in order, the string "(", the first element, the symbol Delimiter, the second element and finally the string ")". We then flatten out each set of groupings—we no longer need the nested lists to provide structure because the parentheses are now in the correct places.

The symbol Delimiter has a special meaning when used with certain functions, as described on its documentation page, but here I’m just using it as a convenient placeholder, to be replaced with an operator later.

At this stage, we can look at what the groupings look like as strings, inserting a dot at the position of every operator:

StringJoin /@ (groupings /. Delimiter -> "\[CenterDot]") // Column

It’s clear now that our next step is to put the correct operators in their correct positions. We already have our list {"-","*","+"} of operators, so for each “candidate” grouping we want to replace the first Delimiter with "-", the second with "*" and the third with "+". I do that with this line:

groupingstrs = 
 StringJoin[
    ReplacePart[#, 
     Rule @@@ Transpose[{Position[#, Delimiter], operators}]]] & /@ 
  groupings

To break that down, let’s look at just one grouping:

candidateGrouping = groupings[[1]]

We first get the indices of Delimiter in that list using Position:

Position[candidateGrouping, Delimiter]

… transpose that with the list of operators:

Transpose[{%, operators}]

… and convert each pair into a rule:

Rule @@@ %

This list of rules can then be used with ReplacePart, which replaces the elements at the specified indices in an expression with specified values:

ReplacePart[candidateGrouping, %]

And finally concatenate those characters together into a string:

StringJoin@%

Those last five steps get applied to each candidate grouping, yielding a list of candidate strings:

groupingstrs // Column

Great! Now all we need to do is pick out the strings that evaluate to a positive number and sort them, as the question instructs:

Sort@Select[groupingstrs, ToExpression[#] > 0 &]

ToExpression takes a string and evaluates it as Wolfram Language input, just as if you had pasted it into a notebook and evaluated it:

ToExpression["(((4-2)*3)+4)"]

Be careful with ToExpression—using it with input from external sources can pose a security risk, as strings evaluated as code can do any number of malicious things. Here, however, the strings in question are safe because we’ve constructed them from a safe initial string, so ToExpression is fine (and indeed, quite useful).
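
If you ever do need to apply ToExpression to less trustworthy input, one common precaution (shown here only as a sketch; this solution doesn’t need it) is to wrap the result in Hold, so nothing evaluates until you’ve had a chance to inspect it:

ToExpression["(((4-2)*3)+4)", InputForm, Hold]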

First Place: Bandit (Solved in 17m 40s)

string = "4-2*3+4";
&#10005

string = "4-2*3+4";
Quiet@Sort@
  StringReplace[
   With[{f = (StringReplace[#, {"{" -> "(", "}" -> ")", 
           "," -> ""}] & /@ 
        ToString /@ Groupings[StringPartition[string, 1], 3])}, 
    Extract[f, Position[ToExpression /@ f, _?Positive]]], " " -> ""]

“Bandit” takes a somewhat brute-force approach to this question. They start with this code:

Groupings[StringPartition[string, 1], 3]

They then convert each grouping into a string:

ToString /@ %

The output looks identical, but all of the curly brackets and commas are now inside strings—Bandit is using the Wolfram Language list syntax to encode grouping semantics in their strings. Smart approach! Next they replace each curly bracket with a corresponding parenthesis and remove all the commas:

f = StringReplace[#, {"{" -> "(", "}" -> ")", "," -> ""}] & /@ %

Then they extract all the groupings that evaluate to a positive value:

Extract[f, Position[ToExpression /@ f, _?Positive]]

… and finally remove the spaces from the results and sort them:

Sort@StringReplace[%, " " -> ""]

Astute readers might notice that some of the candidate groupings (prior to filtering nonpositive ones) produced by Bandit’s method are invalid—for example, "((4-2)(*3+)4)" has parentheses in syntactically prohibited positions. Indeed, if you evaluate the previous step Extract[f,Position[ToExpression/@f,_?Positive]] in a notebook you’ll see error messages produced as ToExpression fails to evaluate the candidates with invalid syntax. But a failed ToExpression call will return $Failed, which is not Positive, so their filtering still works fine—and the Quiet at the top level of their solution simply silences the error messages that bubble up from ToExpression. This “let it fail, then ignore it” technique is why I earlier referred to Bandit’s approach as brute-force—but it works, which is what matters here, and they landed in first place for this question, so who’s to complain? Great job, Bandit!

Second Place: degenerateArtist (Solved in 21m 00s)

numbers = 
  ToExpression[Characters[StringTake["4-2*3+4", {1, -1, 2}]]];
ops = StringTake["4-2*3+4", {2, -1, 2}];
groups = Groupings[numbers, 2];
strings = 
  StringReplace[
     StringJoin @@ 
      Riffle[StringSplit[
        StringReplace[ToString[#], {"{" -> "(", "}" -> ")"}], ","], 
       Characters[ops]], " " -> ""] & /@ groups;
Sort[Select[strings, Positive@*ToExpression]]

degenerateArtist’s solution is something of a blend between mine and Bandit’s. They start by separating the characters of the string into a list of numbers and a list of operators, as I did, which lets them use Groupings[numbers,2] on the list of numbers. This obviates the need to deal with the invalid syntax that Bandit encountered. However, degenerateArtist uses the same technique Bandit employed to stringify the lists of groupings and convert curly brackets to parentheses. But instead of removing commas, they cleverly split the string at each comma, which lets them elegantly Riffle the operators into the correct positions:

candidateGroup = groups[[1]];
StringSplit[
 StringReplace[
  ToString[candidateGroup], {"{" -> "(", "}" -> ")"}], ","]

Riffle[%, Characters[ops]]

StringJoin @@ %

Very nice! Then all that’s left is removing leftover spaces, picking out the positive expressions (in roughly the same way I did) and sorting.

Third Place: ysh (Solved in 23m 21s)

StringReplace[#, " " -> ""] & /@ 
  Cases[(StringReplace[#, {"," -> "+"}, 
       1] & /@ (StringReplace[#, "," -> "*", 
         1] & /@ (StringReplace[#, "," -> "-", 
           1] & /@ (StringReplace[#, {"{" -> "(", 
              "}" -> ")"}] & /@ (ToString /@ 
             Groupings[{4, 2, 3, 4}, 2]))))), 
   ele_ /; ToExpression[ele] > 0] // Sort

Contestant “ysh” creates their candidate groupings and converts them to strings in the same way degenerateArtist did, but they approach the step of inserting operators from a different angle. Since their solution only needs to work on the specific input string given as part of the question, they know in advance where each operator should go, so they need only apply a sequence of StringReplace operations to replace each successive comma with the appropriate operator.

candidates = 
 StringReplace[#, {"{" -> "(", "}" -> ")"}] & /@ (ToString /@ 
    Groupings[{4, 2, 3, 4}, 2])

First iteration:

StringReplace[#, "," -> "-", 1] & /@ %

Second:

StringReplace[#, "," -> "*", 1] & /@ %

And third:

StringReplace[#, {"," -> "+"}, 1] & /@ %

Then, as with degenerateArtist’s solution, they simply remove the extraneous spaces, pick out positive expressions and sort.

At this point, with our seven questions (and three judges!) exhausted, the top of the leaderboard looked like this:

  • Gerli: 14 points
  • Sander: 10 points
  • TotalAnnihilation: 10 points
  • Jon McLoone: 8 points

Gerli’s ahead by a decisive margin, but there’s a tie for second place! Fortunately, we had predesignated a special tiebreaker question for just such a scenario—let’s see who solved it first!

Tiebreaker Question: Full Moons on Halloween

Out of the next 100 years (2019 through 2119), find those in which the Moon is in the Entity["MoonPhase", "Full"] (full Moon) phase at 11pm EDT (GMT-4) on Halloween (October 31). Return a list of years as integers in ascending order.

For this tiebreaker question, only the contenders for second place, Sander and TotalAnnihilation, were to submit solutions, and the first to submit a correct solution would receive a single point to push them into second place while the other would end up in third place. Here’s my solution:

Block[{$TimeZone = -4},
 thisYearHalloween = 
  DateObject[{DateValue[Today, "Year"], 10, 31, 23}];
 nextHalloween = 
  If[Today >= thisYearHalloween, 
   DatePlus[thisYearHalloween, Quantity[1, "Years"]], 
   thisYearHalloween];
 phases = 
  AssociationMap[MoonPhase[#, "Name"] &, 
   DateRange[nextHalloween, 
    DatePlus[nextHalloween, Quantity[1, "Centuries"]], "Year"]];
 Sort@DateValue[
   Keys@Select[phases, EqualTo@Entity["MoonPhase", "Full"]], "Year"]
 ]

Since the question specifies times in Eastern Daylight Time, I start by wrapping the whole solution in a Block with $TimeZone set to –4, the GMT offset for EDT. Working with the correct time zone is important here, because the phase of the Moon changes throughout the night. All our contestants were physically in Champaign, Illinois (the location of the Wolfram Technology Conference), where the local time zone at the time of the competition was Central Daylight Time (GMT-5), so a contestant who left the time zone set to their system default could arrive at an incorrect answer. By locally setting $TimeZone to –4, all date/time functions inside the Block will use that time zone unless instructed otherwise.

Inside my Block, I set thisYearHalloween to a DateObject representing 11pm on October 31 of the current year:

thisYearHalloween = DateObject[{DateValue[Today, "Year"], 10, 31, 23}]

Then, if the current date is past this year’s Halloween, I set nextHalloween to next year’s Halloween; otherwise, I set it to this year’s:

nextHalloween = 
 If[Today >= thisYearHalloween, 
  DatePlus[thisYearHalloween, Quantity[1, "Years"]], 
  thisYearHalloween]

Now the value of nextHalloween is 11pm on the next Halloween to occur, regardless of whether the current date is before or after October 31 of the current year. This is actually not such a great thing for solving this particular question—I wanted my solution to always work relative to the current year, but the question text specifies explicit start and end years, so I really should have used those instead.

Next I use DateRange to create a list of the Halloweens between nextHalloween and 100 years after that date:

DateRange[nextHalloween, 
  DatePlus[nextHalloween, Quantity[1, "Centuries"]], "Year"] // 
 Short[#, 3] &

Now I want a mapping of each Halloween-at-11pm DateObject to the Moon phase at that time, so I use AssociationMap to produce an association:

(phases = AssociationMap[MoonPhase[#, "Name"] &, %]) // Short[#, 3] &

That line uses MoonPhase to determine the phase of the Moon at a given time. By specifying "Name" as the second argument I get a "MoonPhase" entity back, which I can later compare with that given in the question. Using entity comparison is important here, because the Wolfram Language’s threshold of illumination for a full Moon is slightly less than 100%, so simply checking if the illumination fraction is equal to 1 won’t produce the expected result.
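
To see the difference between the two forms, here’s what the comparison looks like for one candidate date, Halloween 2020 (which famously had a full Moon). The first call returns the illumination fraction (a number close to, but generally not exactly, 1), while the second returns the "MoonPhase" entity that can be compared against the one from the question:

date = DateObject[{2020, 10, 31, 23}, TimeZone -> -4];
MoonPhase[date]
MoonPhase[date, "Name"]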

Next we filter the association to contain only dates where the Moon phase is full:

Select[phases, EqualTo@Entity["MoonPhase", "Full"]]

I use EqualTo here, which is an operator form of Equal. EqualTo[x][y] is equivalent to x==y.
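
A quick illustration of the operator form: the first expression returns True, and the second returns {2, 2}:

EqualTo[3][3]
Select[{1, 2, 3, 2, 1}, EqualTo[2]]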

Finally, we get the keys (dates) from the association:

Keys@%

… and get the year of each (they should already be in ascending order, but we sort them just to be sure):

Sort@DateValue[%, "Year"]
&#10005

Sort@DateValue[%, "Year"]

Tiebreaker Winner: TotalAnnihilation (Solved in 3m 33s)

Sort@Cases[# -> MoonPhase[#, "Name"] & /@ 
   DateRange[
    DateObject[{2019, 10, 31, 23}, TimeZone -> "America/New_York"], 
    DateObject[{2119, 10, 31, 23}, TimeZone -> "America/New_York"], 
    "Year"], 
  Verbatim[Rule][a_,  Entity["MoonPhase", "Full"]] :> a["Year"]]

TotalAnnihilation snags first on the tiebreaker, putting them in second place overall. They specify the time zone string "America/New_York" instead of an hour offset from GMT like I did. Such a string tells DateObject to choose standard or daylight time based on the date, so DateObject[{2019,10,31,23},TimeZone -> "America/New_York"] would be interpreted as EDT while DateObject[{2019,12,31,23},TimeZone -> "America/New_York"] would be interpreted as EST.
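
One quick way to convince yourself of this (a sanity check, not part of either solution) is to compare DateObject expressions built with the named zone against ones built with fixed offsets; Equal compares the instants they represent, so both of these should return True:

DateObject[{2019, 10, 31, 23}, TimeZone -> "America/New_York"] == 
 DateObject[{2019, 10, 31, 23}, TimeZone -> -4]
DateObject[{2019, 12, 31, 23}, TimeZone -> "America/New_York"] == 
 DateObject[{2019, 12, 31, 23}, TimeZone -> -5]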

TotalAnnihilation constructs a list of rules of DateObject to "MoonPhase" entity, similar to my association:

# -> MoonPhase[#, "Name"] & /@ 
  DateRange[
   DateObject[{2019, 10, 31, 23}, "Hour", "Gregorian", 
    "America/New_York"], 
   DateObject[{2119, 10, 31, 23}, "Hour", "Gregorian", 
    "America/New_York"], "Year"] // Short[#, 3] &

They then use Cases to pick out all the rules with a full Moon "MoonPhase" entity on the right-hand side, and extract the year from the date on the left-hand side of each:

Cases[%, Verbatim[Rule][a_,  Entity["MoonPhase", "Full"]] :> 
  a["Year"]]

Tiebreaker Runner-Up: Sander (Solved in 4m 37s)

{#, MoonPhase[#, "Name"]} & /@ 
  DateRange[DateObject[{2019, 10, 31, 23}, TimeZone -> -4], 
   DateObject[{2119, 10, 31, 23}, TimeZone -> -4], "Year"];
Cases[%, {_, Entity["MoonPhase", "Full"]}][[All, 1, 1]][[All, 1]]

Sander’s solution is quite similar to TotalAnnihilation’s. The only substantial differences are that Sander used an hour offset for TimeZone like I did, and that they formatted their pairs of dates and "MoonPhase" entities into a list of length-2 sublists (which they then pattern-match against), as opposed to TotalAnnihilation’s list of rules or my association.

Final Results

Here’s what the leaderboard looked like at the end of the competition:

  • Gerli: 14 points
  • TotalAnnihilation: 11 points
  • Sander: 10 points
  • Jon McLoone: 8 points
  • degenerateArtist: 6 points
  • Bandit: 5 points
  • timotheev: 4 points
  • jose: 4 points
  • ysh: 3 points
  • mohammadb: 3 points
  • <3 Wolfie: 3 points
  • etienneb: 1 point
  • dudu: 1 point
  • cheeseburger: 1 point

Then the contestants behind the screen names revealed themselves to claim their prizes!

Third place: Screen name “Sander” belonged to Sander Huisman, professor at the University of Twente in the Netherlands and Wolfram Function Repository contributor extraordinaire. Great job, Sander!

Third-place finisher, Sander Huisman

Sander Huisman hard at work on question 7.

Second place: “TotalAnnihilation” was Carlo Barbieri, consultant in Wolfram’s Engine Connectivity Engineering department. Carlo came in second in last year’s competition too—he says he’ll be aiming for first next year. Excellent work, Carlo!

Second-place finisher, Carlo Barbieri

Carlo Barbieri explaining to the audience how using a Wolfram-provided laptop for the competition has affected his coding speed.

First place: Finally, “Gerli” was unmasked as Gerli Jogeva, consultant on Wolfram’s European Sales team, and our first female Livecoding Champion. Gerli was especially pleased to have beaten Jon McLoone, the 2017 Livecoding Champion—and Gerli’s boss! Congratulations, Gerli!

2019 Livecoding Champion Gerli Jogeva (center) with Carlo Barbieri (right) and Sander Huisman (left)

Left to right: Sander Huisman, 2019 Livecoding Champion Gerli Jogeva, Carlo Barbieri.

I hope my breakdown of the questions and solutions has been informative for Wolfram Language learners and enthusiasts, and gives a glimpse of what it takes to be a Livecoding Champion! Incredible job by the winners, and a big thank you to all of this year’s competitors. I can’t wait to see you—and discover the 2020 Livecoding Champion—next year. Keep on coding!

Mark your calendars now and save the date for our next Wolfram Technology Conference, October 6–9, 2020!

Comments

  1. Thanks for this post, Jesse. We all appreciate your effort to inform us and we know it took some time to put this together. I look forward to studying your content very carefully and learning more about WL (a never-ending challenge).

    Bruce

    • Hi Bruce, thank you for your kind comment! Mastering the Wolfram Language is indeed a never-ending challenge that we all nonetheless choose to pursue. Best of luck, and remember the documentation and Wolfram Community are always there when you need them!

  2. With the understanding that question 5 is purely and at core a topological question.
    Were the names of the Administrative division to also be historical?.

    https://en.wikipedia.org/wiki/List_of_U.S._states_by_date_of_admission_to_the_Union

    Missouri 1821, Nebraska 1867, Wyoming 1890, Idaho 1890, Oregon 1859

    Nevertheless, a great post.

    • Hi Hans, good catch! You’re right that the use of current states in question 5 is something of an anachronism given the historical setting – I guess I should have said “Find the shortest chain of bordering modern-day US states…”.

      I’m glad you like the post!

  3. Great post. Very informative. It looks like a lot of fun. I very much appreciate the detailed breakdown of solutions, as a helpful service to learn about more commands and their use and interaction. I particularly like it that the solutions submitted by all parties, for all problems, are no more than a handful of lines, and that the “better” solutions are often the clearest to understand.

    The trick to success at this event, it seems to me, is to know what Wolfram Language commands to use, and quickly. That’s quite the challenge, considering the staggering number of functions available — to winnow it down to a small number for a given question.

    I may want to participate in future LiveCoding Championships. Is the competition restricted only to conference participants, or is there a way to participate remotely? Also, Jesse: What advice would you give for folks who want to sharpen their skills in preparation for the LiveCoding Championship?

    • Thanks for your comment, Mitchell! I’m very glad the post was helpful for you.

      So far the only livecoding competitions we’ve held have been at the Wolfram Technology Conference and at the Wolfram summer programs (Summer Camp and Summer School). There’s been some discussion of running virtual competitions that people can participate in remotely, but as far as I know there aren’t any concrete plans around that – sorry.

      As far as advice, I’d echo the words of last year’s Livecoding Champion (and this year’s co-MC) Chip Hurst: “read a new function every day in the documentation!” Going into the competition with extensive Wolfram Language knowledge is definitely an asset as you say, but it’s only part of the equation. All of our top competitors this year made heavy use of the documentation during the competition – I know, because I was watching the screens of four of them – so performing well isn’t exclusively a matter of having preexisting expertise. By reading the documentation you gain a feel for how reference pages are structured, so that during the competition you’ll have an intuitive sense of what to search for, what sections of each page to look at and which to skip, and how to parse the descriptions and examples you see – all of which will help you save precious seconds in writing your solutions.

      I hope to see you at a Livecoding Championship in the future!

  4. Thank you for your fruitful post. Especially, “Country” Property vs. GeoIdentify is very useful for me. About a year ago, I tried to create a distribution of the countries of the people who take part of a community. There were many kinds of their location information (country, city, town, company, school, etc.). So it was difficult for me to extract the country name from it. GeoIdentify is great.
    Thank you again.

    • Kotaro-san, I’m very happy that you found the section on GeoIdentify helpful. If you’re working with locations as strings, the various geographic Interpreter types might be useful too, like “Location”:

      In[1]:= Interpreter[“Location”] /@ {“Champaign, IL”, “Tokyo”,
      “Oxford University”, “Volkswagen”}

      Out[1]= {GeoPosition[{40.8758, -77.8023}],
      GeoPosition[{35.67, 139.77}], GeoPosition[{51.7586, -1.2557}],
      GeoPosition[{52.4326, 10.8039}]}

      You can then use GeoIdentify to find the country each GeoPosition is in:

      In[2]:= GeoIdentify[“Country”, %]
      Out[2]= {{Entity[“Country”, “UnitedStates”]}, {Entity[“Country”,
      “Japan”]}, {Entity[“Country”, “UnitedKingdom”]}, {Entity[“Country”,
      “Germany”]}}

      Thank you for your comment!
