Do Computers Dumb Down Math Education?
Since I just heard that the video for Conrad Wolfram’s recent TED talk “Stop teaching calculating, start teaching math” will be coming out soon, I thought I would address the single biggest fear that I hear when I talk about using computers in math education.
The objection that using computers will “dumb down” education usually arrives with related worries: “students have to learn to do it by hand, or how will they know they have the right answer?”, “they won’t understand what is happening unless they do it themselves,” and so on.
Well, let’s examine this by looking at a typical math question that I know I had to solve at some point in my education.
“The Second World War Gustav gun had a muzzle velocity of 820m/s (SI units used throughout). Assuming no air resistance, what was its range when fired at 45°?”
If I am using Mathematica, then I can pretty much type the differential equations and equations for the system and get the answer.
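The original Mathematica input isn’t reproduced in this text, but the no-drag problem collapses to the textbook range formula R = v² sin(2θ)/g. A few lines of Python (my stand-in, not the notebook code) give the same answer:

```python
import math

v0 = 820.0                 # muzzle velocity, m/s
theta = math.radians(45)   # launch angle
g = 9.81                   # m/s^2, the usual toy-model value

# Closed-form vacuum range: R = v0^2 * sin(2*theta) / g
vacuum_range = v0**2 * math.sin(2 * theta) / g
print(round(vacuum_range / 1000, 1))  # -> 68.5 (km)
```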
“Aha!” say the critics. “Proof that the computer has dumbed down the subject: that didn’t require any thought at all. You never solved the ODE; you never solved the equation; the computer did it all for you.”
Well, I do agree that this example is pretty dumb, though not because the computer did the computational work. The example is dumb because the answer is completely wrong.
The gun in question had a range of 48km, not the 68km that we just calculated. It wasn’t Mathematica’s fault, and doing it by hand wouldn’t help—the equations are wrong. They are not wrong as far as the current education system is concerned; they would get top marks in my old school. They are wrong in the sense that they do not reflect reality.
The dumbing down was in the question, which explicitly excluded the atmosphere, and implicitly excluded any other influencing forces or complicating factors. It’s hard to imagine any scenario where that might be true.
The reason why that typical question is so dumbed down is that, without computers, it quickly becomes too hard to solve by hand. In an educational system geared to neat little hand-solvable problems, the only solution is to look at a toy version of the problem.
One key conceptual problem shown here is that, in education, assumptions are usually instructions. Instead, we should be teaching students that assumptions (even implicit ones) are choices. Each should be considered for the impact that it might have on the validity of the solution.
To illustrate the point, let’s look at a less dumbed down version of the same problem. The biggest missing factor is drag from the air.
Drag is given by this formula:

F = ½ ρ Cd A v²

where ρ is the base air density, A is effective area, and Cd is the drag coefficient (a measure of how streamlined the shape is). But drag as a force is applied against the direction of movement, so we need to resolve this into x and y components, based on the x and y velocities.
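The post’s Mathematica definition isn’t shown here; this Python sketch (the function name `drag` and the area value are my own assumptions) resolves the force into components, with sea-level density held constant for now:

```python
import math

RHO0 = 1.225   # sea-level air density, kg/m^3 (held constant for now)
A = 0.503      # effective area, m^2 (my assumption; refined below)
CD = 0.28      # drag coefficient

def drag(vx, vy):
    """Drag force components (N), opposing the velocity (vx, vy)."""
    v = math.hypot(vx, vy)
    if v == 0:
        return (0.0, 0.0)
    f = 0.5 * RHO0 * CD * A * v * v   # magnitude: 1/2 * rho * Cd * A * v^2
    return (-f * vx / v, -f * vy / v)
```

Note that the force always points against the motion: a falling shell feels upward drag, a rising one feels downward drag.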
And now let’s put that into our system. Already this has become a tricky problem that is too hard to compute in closed form (another real-world issue that students should understand). So I will solve numerically (essentially impossible by hand):
We are missing some information, so off I go to Wikipedia to gather some data on air and the Gustav gun.
Drag coefficient depends on the shape of the shell: streamlined can be as low as 0.04, a cube is 1.05, a sphere is 0.47. I am going to cheat here and claim it is 0.28 without any analysis or citation. You should deduct some marks for this!
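With the Wikipedia values in hand (a 4,800kg high-explosive shell and 80cm calibre, so an effective area of about π × 0.4² m²), here is a sketch of the numeric integration at constant sea-level density—a simple fixed-step Euler loop standing in for the original NDSolve call:

```python
import math

# Gathered data (Wikipedia values for the gun; Cd is the post's "cheat"):
M = 4800.0                 # HE shell mass, kg
A = math.pi * 0.4**2       # effective area for the 80 cm calibre, m^2
CD = 0.28
RHO0 = 1.225               # sea-level air density, kg/m^3 (constant for now)
G = 9.81
V0, THETA = 820.0, math.radians(45)

def range_constant_density(dt=0.01):
    """Integrate the trajectory with fixed-step Euler until impact."""
    x, y = 0.0, 0.0
    vx, vy = V0 * math.cos(THETA), V0 * math.sin(THETA)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        f = 0.5 * RHO0 * CD * A * v / M   # drag acceleration per unit velocity
        ax, ay = -f * vx, -G - f * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

print(round(range_constant_density() / 1000, 1))  # well short of the no-drag 68km
```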
There are multiple altitude effects that we can add in. We might ignore those if we are hitting a tennis ball, but this shell goes up 15km and air density falls significantly at that altitude. Here is a version of our Drag function that takes density as a parameter:
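Again the original code isn’t shown; in my Python sketch the change just threads density through as an argument:

```python
import math

A = 0.503   # effective area, m^2 (my assumption)
CD = 0.28   # drag coefficient

def drag(vx, vy, rho):
    """Drag force components (N) at air density rho, opposing (vx, vy)."""
    v = math.hypot(vx, vy)
    if v == 0:
        return (0.0, 0.0)
    f = 0.5 * rho * CD * A * v * v
    return (-f * vx / v, -f * vy / v)
```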
Now I need a model for how air density varies. This alone is a tough problem. We don’t want students perpetually trapped in solving preliminary steps from first principles, so they need to be able to use existing models.
Here is the model for air density and some key values, valid to about 11000m (I am going to assume that is close enough).
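The formula itself isn’t reproduced in this text; the standard choice is the International Standard Atmosphere troposphere model, which in Python reads (constants from the ISA definition):

```python
RHO0 = 1.225      # sea-level density, kg/m^3
T0 = 288.15       # sea-level temperature, K
L = 0.0065        # temperature lapse rate, K/m
G0 = 9.80665      # standard gravity, m/s^2
RS = 287.053      # specific gas constant for dry air, J/(kg*K)

def air_density(h):
    """ISA troposphere density at altitude h metres (valid to ~11,000 m)."""
    return RHO0 * (1 - L * h / T0) ** (G0 / (RS * L) - 1)
```

At 11,000m this gives about 0.36 kg/m³—less than a third of the sea-level value, which is why ignoring altitude would be a poor choice here.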
It is unreasonable to expect students to memorize all such formulas—and I include simple ones such as sin(2θ) = 2 sin(θ)cos(θ) and the dozen or so long-forgotten variants that I was made to memorize. That means an “open book” approach of looking them up as needed. But before spoon-feeding students the right formulas, remember that recognizing which model is appropriate for the information they have and need, and figuring out how to fill in the parts of the model that are missing, are themselves important skills for the real world.
Gravity also decreases with altitude, and while I am looking things up, here is a better value for g at the surface:
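The looked-up values aren’t shown in this text; a sketch using the standard surface value g₀ = 9.80665 m/s² and the inverse-square falloff (mean Earth radius of 6,371km assumed):

```python
G0 = 9.80665       # standard gravity at the surface, m/s^2
RE = 6.371e6       # mean Earth radius, m (assumed)

def gravity(h):
    """Gravitational acceleration at altitude h metres above the surface."""
    return G0 * (RE / (RE + h)) ** 2
```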
It turns out that this effect is only about 0.3% at the top of the arc. But it is easy to add in, so I will use it.
Now I will add those into the system of equations, and I’ll also add a wind speed parameter while I’m at it (assuming constant wind speed at all altitudes, for the duration of flight).
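Here is my Python reconstruction of the assembled system (all parameter values as assumed above; the original Mathematica code may differ in detail). Drag now acts on the velocity relative to the air, which is how the wind enters:

```python
import math

M = 4800.0                # shell mass, kg (Wikipedia HE shell value)
A = math.pi * 0.4**2      # effective area for the 80 cm calibre, m^2
CD = 0.28                 # drag coefficient
RHO0, T0, LAPSE = 1.225, 288.15, 0.0065   # ISA sea-level values
G0, RS, RE = 9.80665, 287.053, 6.371e6
V0, THETA = 820.0, math.radians(45)

def air_density(h):
    # ISA troposphere model; used beyond its 11 km validity, as assumed above
    return RHO0 * (1 - LAPSE * h / T0) ** (G0 / (RS * LAPSE) - 1)

def shot_range(wind=0.0, dt=0.01):
    """Range with quadratic drag, altitude-varying density and gravity,
    and a constant horizontal wind (m/s), via fixed-step Euler."""
    x, y = 0.0, 0.0
    vx, vy = V0 * math.cos(THETA), V0 * math.sin(THETA)
    while y >= 0.0:
        rvx = vx - wind                       # velocity relative to the air
        v = math.hypot(rvx, vy)
        f = 0.5 * air_density(y) * CD * A * v / M
        g = G0 * (RE / (RE + y)) ** 2
        ax, ay = -f * rvx, -g - f * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

print(round(shot_range() / 1000))   # compare with the quoted 48km range
```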
And now we have our 48km range. It’s pretty clear that the model matters.
How Else Can We Be Less Dumb?
I am still a long way from the “truth”. I have ignored the curvature of the Earth, the fast high-altitude jet streams and gustiness of low-level wind, the Coriolis effect, atmospheric pressure, rainfall, and humidity, and I am working only in 2D. I’ve also ignored the fact that gravity varies between the equator and the poles due to the non-spherical Earth and rotation effects. There will even be a small eddy current effect in the metal as it passes through the Earth’s magnetic field and gravitational effects from the Moon, Sun, and other planets. A “perfect” model would be a major undertaking, perhaps worthy of a PhD thesis, and probably quite useless in practice.
I had to make choices to dumb this down to fit the size of a blog post, and had an intuition about which effects would give the best improvement in answer against the effort to implement. We should also consider the measurability of the parameters. These are the kinds of choices made in real-world modeling that we fail to teach people.
More importantly, I have spent no time here on validation tests. Are my models plausible? Was my intuition about which effects were insignificant correct? Did I type in the equations correctly? It is no longer a trivial task, for student or teacher, to check that I haven’t done something wrong. I fully expect someone to point out a mistake that I have made somewhere in this post, and equally that most readers will not have noticed it. Validation is a skill we must teach. Does my model fit the simple model at low altitudes? Does it behave correctly if fired straight up, or straight across, or at zero velocity, or at other known values?
It is not good enough to just check our work by hand. This is now a real problem, and like most real problems, it is tricky and messy and needs thought.
Once we are happy with our model, we could be asking much more interesting questions than what the range is, like, “How much difference does a wind speed of ±5m/s make?”
At a quick glance, we can see that it is a little over 300m.
“If the wind varies by 2mph, and muzzle velocity and launch angle have a standard deviation of 1%, what is the probability that we can hit a target with a 1km diameter?” (According to Wikipedia, the higher-velocity Paris Gun had a range of 130km in the First World War, but missed the whole of Paris with half its shots.)
The challenge for teachers who embrace computers as tools is to teach students to trade off complexity against correctness sensibly, to recognize what they need to know, how to find it out, how to calculate it, and how to validate the result. In short, to teach them to think rather than to perform computation procedures.
Perhaps more importantly, through interesting and challenging tasks, we must give students the confidence to work with problems that don’t have neat little pretend answers, but that are messy, have alternative approaches, and where perhaps no one knows the answer yet—just like the problems that they will face in the real world.