Skipping the hard problems

There is a problem with math education. Well, actually there is more than one, but here I’ll only talk about one, namely the one that has to do with skipping the hard problems.

When we’re handed a problem set in math class in elementary school or high school, our teachers often tell us: Don’t get stuck on a problem you find difficult to solve; move on to the next one. This is good advice if the aim is to get a good grade, since we get better grades the more problems we solve correctly. But is it good advice if the aim is to become better mathematicians and prepare for a career as research mathematicians? I don’t think so. Let me explain why.

Imagine being given a problem set with 30 questions, and 3 hours to work on it. That gives you on average 6 minutes per problem. Now, for the sake of argument, let’s say you find 3 of these problems hard, as you haven’t found a way to approach them after a couple of minutes, and so, following the advice about moving on, you do just that. You skip these problems, intending to return to them after having solved all the easier ones. With only ten minutes left you have now solved the other 27 problems, and you return to one of the harder ones. No luck, and so you turn in your problem set with 27 problems solved. Pretty good. The next week you get your problem set back, with 24 correct answers, and a good grade to go with it. This is a fairly typical scenario, but what if you had done the opposite?

What if you had skipped the easy problems, since you – after looking at them briefly – felt confident you could solve them? That would give you, let’s say, 2 1/2 hours to solve the 3 harder problems; the problems most teachers would advise you to skip, knowing that your grade correlates with how many problems you solve. With 2 1/2 hours left for just 3 problems, you’ve got 50 minutes for each of them. That’s more than 8 times the 6-minute average. Now, let’s say that, after much effort and frustration, you manage to solve 2 of these before time runs out. Next week, you get your problem set back. You solved both of them correctly, but your grade is as low as they come.

Okay, now what’s the point of all this, you might wonder. Well, my claim is that if you had focused on the 3 harder problems you would have learned more. Yes, more. If I’m right about this, it’s a rather sad affair that the grading system motivates students to approach problem sets in a way that isn’t helping them learn as much as they could. If we could clone a math student and have him and his clone go through e.g. high school, with him skipping the hard problems on problem sets, and his clone skipping the easy problems, I’m convinced the clone would be much better prepared for a career as a research mathematician than his I-skip-the-hard-problems brother. If you doubt this, then consider the following.

You are given a crossword puzzle. Some of the words you can easily fill out. Others you have no idea about, so you leave them blank and turn to the crossword puzzle on the next page of your crossword puzzle book. Your clone, sitting next to you with the same crossword puzzle book in hand, doesn’t leave a single puzzle incomplete, but stays with it until he – with the help of a dictionary, perhaps – has solved it. By the time you are done with all 365 puzzles, your clone is still struggling with puzzle no. 97. Now, who do you think would score the highest on a vocabulary test, you or your clone? Yes, your clone, of course. No doubt about it. Why? Because we don’t learn much by doing something we already know how to do. That holds for solving crossword puzzles and anything else, including mathematics. So the one who cares not about his grades, but about becoming a better mathematician, should not skip the hard problems. He should skip the easy ones.

But that’s not how it works, which is very unfortunate. Not only do the students not learn as much as they could from their problem sets, they also get into the habit of giving up on problems that don’t seem to yield a solution within a few minutes. But how many worthwhile (mathematical) problems give even a hint of a solution in a few minutes? Research mathematicians publish maybe a handful of papers a year, so hundreds of hours are spent on each paper. It’s a long and – according to some mathematicians – frustrating endeavor. But when it finally clicks, and the solution shows itself, it’s magic. Students who aspire to become research mathematicians should learn to handle this frustration, and to find joy in struggling with problems, not for a few minutes, but for long hours, if not days, weeks or months. Telling students to skip the hard problems doesn’t teach them that. Instead, it teaches them to give up, and to skip all the problems worth solving.

Numbers

Much of the early development of mathematics can be thought of as driven by the development of number systems applicable to increasingly sophisticated mathematical problems. – Keith Devlin (The Millennium Problems)

The numbers we use for counting have a 7,000-10,000 year long history. These numbers we call the natural numbers, and they are 1, 2, 3, 4, 5, 6, 7… and so on.

Later, around 2,500-2,700 years ago, the numbers had been expanded to also include positive fractions, like 1/2, 3/4, 5/6 and so on. These numbers, together with the negative fractions, are what we today call the rational numbers.

A little later, at the time of Pythagoras, it was discovered that the rational numbers were not enough. According to legend, a young Greek mathematician realized that the length of the hypotenuse in a right-angled triangle with base and altitude both 1 was equal to \sqrt{2}, and that this number could not be expressed as a fraction (a ratio between two whole numbers). Today we call such numbers irrational, and the rational and irrational numbers together make up what we call the real numbers.

Later mathematicians used the real numbers, but a full understanding of them didn’t arise until the end of the nineteenth century, when mathematicians also began to fully accept the negative numbers.

Mathematicians also struggled with equations like x^2 + 1 = 0. Obviously x^2 must be equal to -1, but no real number, positive or negative, yields a negative number when squared. In 1770, Euler called such numbers imaginary numbers, in his book Algebra.

And shortly after, a new breed of numbers arrived on the scene: the complex numbers, so named by Gauss. They were the first numbers not all lying on the one-dimensional number line. A complex number consists of two parts, a real part and an imaginary part. The real part is just a real number; the imaginary part is a real number multiplied by i, with i^2 = -1.
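
To make this concrete with a small example of my own (standard fare, but not from the text above): the equation x^2 + 1 = 0 has the two solutions x = i and x = -i, since both i^2 and (-i)^2 equal -1. And a number like 3 + 2i is a complex number whose real part is 3 and whose imaginary part is 2 multiplied by i.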

Second degree polynomials – part two

In the previous post we saw how to solve second degree polynomials of a certain kind. One example of such a polynomial (as seen in the previous post) is this:

x^2 + 6x + 9

We can easily solve this polynomial because half of the constant in the second term (6) is equal to 3, and 3^2 is equal to the third term (9). Further, the first term (x^2) doesn’t have a constant in front.

As long as these criteria are fulfilled, we have a polynomial we now know how to solve, using what we learned in the previous post.
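
(In case the previous post isn’t fresh in memory: as we’ll confirm again later in this post, the reason this works is that x^2 + 6x + 9 can be written as (x + 3)^2, so an equation like x^2 + 6x + 9 = 0 immediately becomes (x + 3)^2 = 0, giving x = -3.)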

Now, it’s time to look at cases where

c \ne ({b \over 2})^2 ,

with c referring to the third term, b to the second, and a to the first, in the standard form of the second degree polynomial (ax^2 + bx + c = 0). We will keep a = 1 for now, but look at what to do with a \ne 1 shortly. When we are done with that, we’ll be able to solve any second degree polynomial. But one step at a time.
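
To make the distinction concrete, here is an example of my own (it doesn’t appear in the posts): in x^2 + 6x + 4 = 0 we have b = 6 and c = 4, and since ({6 \over 2})^2 = 9 \ne 4, this is one of the new cases we are working towards.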

First step on the way to solving second degree polynomials

First a quick reminder of the basics.

If x^2 = a
then x = \pm \sqrt{a}

So, if x^2 = 9
then x = \pm \sqrt{9} = \pm 3

Knowing this, let’s move on, building on what we know.

If (x + a)^2 = b
then x + a = \pm \sqrt{b}

So, if (x + 3)^2 = 25
then x + 3 = \pm \sqrt{25} = \pm 5

This should be obvious. And it naturally follows that

x_1 = +5 - 3 = 2 and x_2 = -5 - 3 = -8

Or, more generally:

If (x + a)^2 = b
then x = \pm \sqrt{b} - a
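
For instance (a fresh example, just to see the rule with different numbers): if (x + 2)^2 = 16, then x = \pm \sqrt{16} - 2 = \pm 4 - 2, giving x_1 = 2 and x_2 = -6.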

Now, let’s take another look at the equation (x + a)^2 = b .

Just as x^2 = x \times x
so (x + a)^2 = (x + a) \times (x + a)

Yes, (x + a)^2 looks much more complicated than x^2 at first sight. But, as we can see above, the way we handle it is (of course) the same.

So far, so good. Now comes the tricky – and fun(!) – part, where we multiply out the contents of the two parentheses. It’s done by multiplying each term in the first parenthesis with each term in the second parenthesis, one at a time, like this:

(x + a) \times (x + a) = x \times x + x \times a + a \times x + a \times a

This we can simplify to:

x^2 + 2ax + a^2

This is a second degree polynomial, which – as we’ll see shortly – turns out to be mighty useful, as it helps us solve some quadratic equations.

Let’s look at the equation we had above again, namely this one:

(x + 3)^2 = 25

Using what we just learned, looking only at the left side of the equation, we see that:

(x + 3)^2 = x^2 + (2 \times 3)x + 3^2 = x^2 + 6x + 9

So, our pretty and simple expression (x + 3)^2 equals the polynomial x^2 + 6x + 9. This we found out by squaring (x + 3).

Since we already know how to solve an equation like (x + 3)^2 = 25, we can also solve equations like x^2 + 6x + 9 = 25, because the two equations say exactly the same thing.
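
Written out from start to finish, the whole chain looks like this:

x^2 + 6x + 9 = 25
(x + 3)^2 = 25
x + 3 = \pm \sqrt{25} = \pm 5
x_1 = 2 and x_2 = -8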

We just have to do the opposite of what we just did, namely take a polynomial like x^2 + 6x + 9 and bring it into the simpler form (x + 3)^2 .

But how? Think about it a minute before you read on. Maybe you’ll discover it yourself. If not, don’t despair, you’ll get better every time you try, even if you don’t find the solution. Just keep trying. It will become easier over time.

The key is to be found here (as seen above):

(x + a)^2 = x^2 + 2ax + a^2

Keeping this in mind while looking at x^2 + 6x + 9, we can see that a must be 3. Why? Simply ignore everything in the polynomial except the middle term (6x), and the corresponding term on the right side of the equation above (2ax). We then have:

2ax = 6x

So, solving for a , we find:

a = {6x \over 2x} = 3

This, as we already know, is true. So, taking the constant in the middle term (the term with x to the first power) and dividing it by 2, we get our a in the expression (x + a)^2. Always.
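
To try the rule on a fresh polynomial (one I’ve made up for practice, not from the posts): in x^2 + 10x + 25 the middle term is 10x, so a = {10x \over 2x} = 5. And indeed, (x + 5)^2 = x^2 + (2 \times 5)x + 5^2 = x^2 + 10x + 25; here, too, the third term happens to equal a^2.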

So far, so good. We can now solve certain second degree polynomials, but not all. In the example we worked through above, the third term in the polynomial was 9, which – lucky for us – turns out (by design, admittedly) to equal a^2, as our a was 3. But what if that wasn’t the case?

Well, that’s what we’ll look at in the next post. But before you read that one, I suggest you play around with a bunch of expressions of the form (x + a)^2, and expand them into polynomials. Also, do the opposite, and turn a bunch of polynomials into the simpler form, (x + a)^2.
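
If you’d like to check your answers with a computer, here is a small sketch in Python using the sympy library (the tooling and the example polynomials are my own choices; they’re not part of the exercise, and checking by hand works just as well):

from sympy import symbols, expand, factor

x, a = symbols("x a")

# Expanding (x + a)^2 gives x^2 + 2ax + a^2 (sympy may print the terms in a different order)
print(expand((x + a)**2))

# Going the other way: factoring a polynomial back into the simpler form
print(factor(x**2 + 6*x + 9))    # prints (x + 3)**2
print(factor(x**2 + 10*x + 25))  # prints (x + 5)**2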

Have fun! (How can you not? It’s mathematics!)