Computing sqrt(2) by hand
(This is Yan Sheng.)
Today's blog post will be a change of pace from the usual contest math content: one day I was wondering to myself how many digits of $\sqrt2$ I could calculate by hand (who hasn't?). Math is more fun when you try stuff out for yourself, so here's some warm-up:
Exercise: Come up with a method of calculating the decimal expansion of $\sqrt2$ by hand. Then spend 5 or 10 minutes (or until you get bored) actually executing it. Can you get the right answer to 6 digits? 8? 10? How long would it take (and how painful would it be) to get 20 correct digits using your method?
Note: throughout this post, when I refer to "decimal places" or "digits" I always mean after the decimal point, eg. $1.4$ is an approximation of $\sqrt2$ with 1 correct digit.
History of computation records
Here's a (probably incomplete) table of past records for number of digits of $\sqrt2$ calculated, partly to add some space to avoid spoilers for those who are trying the above exercise, and partly because it turned out to be surprisingly hard to compile such a list.
Source | Year | Digits |
---|---|---|
YBC 7289 | 1800-1600 BCE | 5 |
Baudhayana | 800-500 BCE | 5 |
Zhu Zaiyu | 1584 | 24 |
J. M. Boorman | 1887 | 315 (486 claimed) |
R. Coustal | 1950 | 1,032 |
H. S. Uhler | 1950 | 1,544 |
K. Takahashi, M. Sibuya | 1966 | 14,000 |
M. Lal | 1967 | 19,600 |
M. Lal | 1968 | 39,000 |
M. Lal, W. F. Lunnon | 1968 | 100,000 |
J. Dutka | 1971 | 1,000,000 |
R. Nemiroff, J. Bonnell | 1994 | 10,000,000 |
Y. Kanada, D. Takahashi | 1997 | 137,438,953,444 |
S. Kondo | 2010 | 1,000,000,000,000 |
A. Yee | 2012 | 2,000,000,000,050 |
R. Watkins | 2016 | 2,000,000,000,100 |
R. Watkins | 2016 | 5,000,000,000,000 |
R. Watkins | 2016 | 10,000,000,000,000 |
T. Hanselmann | 2022 | 10,000,000,001,000 |
J. Ranous | 2023 | 20,000,000,000,000 (unverified) |
Some notes on the above table:
- Someone on Hacker News claims that Riemann computed 38 digits of $\sqrt2$ by hand, but I couldn't find any other source which corroborates this. If anyone wants to visit the Göttingen State and University Library to scan the relevant pages of Riemann's notes, please let me know.
- The introduction to the Uhler paper is pretty funny:
The present investigation was begun with the single intention of testing a value for as computed by J. Marcus Boorman about 65 years ago. This datum was chosen because the author had acquired the impression that Boorman seldom if ever produced a reliable result in his calculations and it seemed fair to check this particular approximation since it was claimed that he had verified many of his digits.
Spoiler alert: not all of Boorman's digits were correct.
- Breaking news: the latest record was just announced yesterday (Nov 17, 2023)!
Back to calculation by hand: here are some of the things I tried, see how you compare!
Digit-by-digit
Let's start with some obvious facts: we have $1^2 = 1 < 2 < 4 = 2^2$, so $1 < \sqrt2 < 2$, ie. the digit of $\sqrt2$ before the decimal point is $1$.
How should we find the next digit? Write $\sqrt2 \approx \frac{10+a}{10}$ for the next digit $a$, so we want to solve the equation $\left(\frac{10+a}{10}\right)^2 = 2$, which after expanding and clearing denominators gives $(10+a)^2 = 200$. Now we check that $14^2 = 196 \le 200 < 225 = 15^2$, so $a = 4$, and thus we have $1.4 \le \sqrt2 < 1.5$.
We can repeat this process for the next digit: writing $\sqrt2 \approx \frac{140+b}{100}$, we want to solve $\left(\frac{140+b}{100}\right)^2 = 2$, or $(140+b)^2 = 20000$. Check that $141^2 = 19881 \le 20000 < 20164 = 142^2$, so $b = 1$, and we have $1.41 \le \sqrt2 < 1.42$.
One more time, just to be sure: writing $\sqrt2 \approx \frac{1410+c}{1000}$, we want to solve $\left(\frac{1410+c}{1000}\right)^2 = 2$, or $(1410+c)^2 = 2000000$. Check that $1414^2 = 1999396 \le 2000000 < 2002225 = 1415^2$, so $c = 4$, and we have $1.414 \le \sqrt2 < 1.415$.
Okay, so we can continue this process and just calculate the digits one by one. We don't want to write down a quadratic equation for every digit, so conventionally the above procedure is written out in a layout similar to long division, bringing down the digits of $2.00\,00\,00\ldots$ two at a time.
Exercise: Figure out how the procedure we were performing with quadratic equations is equivalent to this "long division" method.
Exercise: Use this method to compute the square root of your favourite number between 1 and 100 (which is not a square) to 4 decimal places.
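To make the digit-by-digit procedure concrete, here's a rough Python sketch (the function name and the integer-only formulation are mine, not from the post): for each new decimal place it searches for the largest digit whose appended value still squares to at most the target, which is exactly the quadratic-equation step above.

```python
def sqrt_digits(n: int, places: int) -> str:
    """Digit-by-digit square root of n, truncated to `places` decimals.

    Work with integers only: find the largest r with r^2 <= n * 10^(2k),
    one decimal digit at a time (each digit is one "quadratic equation" step).
    """
    r = 0
    for k in range(places + 1):
        target = n * 10 ** (2 * k)
        d = 0
        while (10 * r + d + 1) ** 2 <= target:
            d += 1
        r = 10 * r + d
    s = str(r)
    return s[:-places] + "." + s[-places:]

print(sqrt_digits(2, 10))  # 1.4142135623
```

Note that this truncates rather than rounds, matching the "check which digit still fits" process done by hand.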
Of course, in principle this method gives us as many digits as we want, but in practice I get bored within the first handful of digits. This suggests a rating system:
Digits Until I Got Bored: 5
Time to move on and brainstorm some other methods.
Heron's method
Suppose we have some approximation $x_0 \approx \sqrt2$. If we had a procedure that uses $x_0$ to somehow generate a better approximation $x_1$, then by repeating the same procedure we could get an even better approximation $x_2$, then $x_3$, and so on, giving us a sequence of numbers that converges to $\sqrt2$. Now the problem of calculating $\sqrt2$ to a given precision reduces to calculating a good enough approximation $x_n$.
Okay, so how do we generate a better approximation from a given approximation $x$? Actually, how do we generate any other approximation from $x$? We probably need to think about some properties that characterise $\sqrt2$.
For example, we might consider the equation $\sqrt2 = \frac2{\sqrt2}$. This suggests that if $x$ is a good approximation to $\sqrt2$, then so should be $\frac2x$. In fact, if we write $x = \sqrt2 + \varepsilon$, then we can expand the fraction as $$\frac2x = \frac2{\sqrt2+\varepsilon} = \sqrt2 - \varepsilon + \frac{\varepsilon^2}{\sqrt2} - \cdots.$$This says that $\frac2x$ is nearly as good of an approximation as $x$, but in the opposite direction: the error changes sign. Hence the average $\frac12\left(x + \frac2x\right)$ should be an even better approximation of $\sqrt2$, with error on the order of $\varepsilon^2$ instead of $\varepsilon$.
Thus we expect that the sequence defined by $$x_{n+1} = \frac12\left(x_n + \frac2{x_n}\right)$$should converge to $\sqrt2$, provided that the initial approximation $x_0$ is good enough. The convergence turns out to be pretty fast, since the error is roughly squared with each iteration (this is known as quadratic convergence); equivalently, the number of correct digits doubles at every step!
Exercise: Show that in fact $x_n \to \sqrt2$ as long as $x_0 > 0$. What happens if $x_0 < 0$?
Exercise: Show that we also get the above recurrence by applying Newton's method to the function $f(x) = x^2 - 2$.
The above procedure is known as Heron's method (yes, same guy as Heron's formula), or the Babylonian method (even though we don't know if the Babylonians knew about it).
Let's see this in action. We'll start with $x_0 = 1$, and calculate $$x_1 = \frac32, \quad x_2 = \frac{17}{12}, \quad x_3 = \frac{577}{408} = 1.4142156\ldots$$Here $x_3$ gives 5 correct digits of $\sqrt2$.
At this point I realise a key flaw of this method: I don't actually want to divide large integers by hand, and the next term in this sequence ($x_4 = \frac{665857}{470832}$) already looks pretty bad to me, even though it probably has at least 10 correct digits. So maybe this is the end of the road, and thus
Digits Until I Got Bored: 5…
…unless?
Let's give the method one more shot, choosing a different starting value $x_0 = \frac75$. Then$$x_1 = \frac12\left(\frac75 + \frac{10}7\right) = \frac{99}{70},$$which is alright since dividing by 7 isn't too bad. Now the miracle is that dividing by 99 is also not too bad, so the next iteration is actually really easy: $$x_2 = \frac12\left(\frac{99}{70} + \frac{140}{99}\right) = \frac12(1.4142857\ldots + 1.4141414\ldots) = 1.41421356\ldots$$This approximation has 8 correct digits. But now our luck runs out, because $\frac2{x_2} = \frac{27720}{19601}$, and I'm not really up to dividing by 19601. Final verdict:
Digits Until I Got Bored: 8
Exercise: Could we have come up with a recurrence without using division by $x_n$? Consider the identity$$\sqrt2 = \frac{(\sqrt2)^3}2.$$Hence if $x \approx \sqrt2$, then $\frac{x^3}2 \approx \sqrt2$ as well. How good is the latter approximation compared to the former? (Note that here the two errors have the same sign.) Hence show that the sequence$$x_{n+1} = \frac{3x_n}2 - \frac{x_n^3}4$$converges quadratically to $\sqrt2$, if $x_0$ is close enough to $\sqrt2$. Which function is this the Newton's method iteration for?
Exercise: We can simplify the above recurrence even further for hand calculations. Let $y_n = \frac{x_n^2}2$; show that$$y_{n+1} = \frac{y_n(3-y_n)^2}4.$$Writing $r_n = \frac{3-y_n}2$, we have $x_{n+1} = r_nx_n$, and $y_{n+1} = r_n^2y_n$. Hence deduce the relation between the product $x_0r_0r_1\cdots r_{n-1}$ and $\sqrt2$. If $y_n = 1+\varepsilon_n$, express $\varepsilon_{n+1}$ as a function of $\varepsilon_n$. Now actually try to approximate $\sqrt2$ by calculating a few steps of this recurrence. (This method is known as Goldschmidt's algorithm.)
Rational approximations
If we want a good rational approximation $\frac pq \approx \sqrt2$, one possible requirement is for $p - q\sqrt2$ to be small. How do we generate numbers of the form $p - q\sqrt2$ (with nonzero integers $p,q$) which are close to 0? Here's an idea: since $0 < \sqrt2 - 1 < \frac12$, we could consider its powers$$(\sqrt2-1)^n = (-1)^n(p_n - q_n\sqrt2),$$which converge to 0 exponentially fast. Then $\frac{p_n}{q_n}$ is a sequence of rational approximations converging to $\sqrt2$. The terms $p_n$ and $q_n$ can be computed with the recurrence $(\sqrt2-1)^{n+1} = (\sqrt2-1)(\sqrt2-1)^n$, ie.$$p_{n+1} = p_n + 2q_n, \qquad q_{n+1} = p_n + q_n.$$The first few values are given as follows:

$n$ | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
$p_n$ | 1 | 3 | 7 | 17 | 41 | 99
$q_n$ | 1 | 2 | 5 | 12 | 29 | 70
How good is this approximation, ie. what is the error $\frac{p_n}{q_n} - \sqrt2$ like? Note that the above recurrence remains unchanged when we replace $\sqrt2$ with $-\sqrt2$, so that we also have$$(-\sqrt2-1)^n = (-1)^n(p_n + q_n\sqrt2).$$But $(\sqrt2-1)(-\sqrt2-1) = -1$, so$$p_n^2 - 2q_n^2 = (p_n - q_n\sqrt2)(p_n + q_n\sqrt2) = (-1)^n,$$which gives$$\left|\frac{p_n}{q_n} - \sqrt2\right| = \frac1{q_n(p_n + q_n\sqrt2)},$$which is close to $\frac1{2\sqrt2\,q_n^2}$ for large $n$.
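Here's a hypothetical sketch of the recurrence in Python (the names are my own), which also prints $p_n^2 - 2q_n^2$ to confirm that it alternates between $-1$ and $+1$:

```python
def pell_approximations(n: int):
    """Yield (p_k, q_k) coming from (sqrt(2)-1)^k, via p' = p + 2q, q' = p + q."""
    p, q = 1, 1
    for _ in range(n):
        yield p, q
        p, q = p + 2 * q, p + q

for p, q in pell_approximations(6):
    # p^2 - 2q^2 is always +-1, so |p/q - sqrt(2)| is about 1/(2*sqrt(2)*q^2).
    print(p, q, p * p - 2 * q * q)
```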
Hey, here's an idea: why calculate all the powers of $\sqrt2-1$, when I could get to higher powers with less calculation by repeated squaring? Let's try it out:$$(\sqrt2-1)^2 = 3 - 2\sqrt2, \quad (\sqrt2-1)^4 = 17 - 12\sqrt2, \quad (\sqrt2-1)^8 = 577 - 408\sqrt2, \quad \ldots$$Hang on, don't these numbers look kinda familiar?
Exercise: Show that squaring $p - q\sqrt2$ is equivalent to performing one iteration of Heron's method on $\frac pq$.
There are several other approaches to getting good rational approximations of $\sqrt2$, which I'll leave as exercises:
Exercise: Notice that$$\sqrt2 = 1 + \frac1{1+\sqrt2}.$$Show that the sequence$$1+\frac12, \quad 1+\cfrac1{2+\cfrac12}, \quad 1+\cfrac1{2+\cfrac1{2+\cfrac12}}, \quad \ldots$$converges to $\sqrt2$. Simplify each of these continued fractions; what do you see?
Exercise: Another possible indication that a rational approximation $\frac pq \approx \sqrt2$ is good is when $|p^2 - 2q^2|$ is small. Clearly $p^2 - 2q^2$ cannot be 0, unless $p = q = 0$. Find some small integer solutions to $p^2 - 2q^2 = \pm1$; what do you see? Guess what the complete set of solutions to $p^2 - 2q^2 = \pm1$ are, then prove your conjecture. (In general, the Pell's equation $x^2 - dy^2 = 1$ is closely related to the continued fraction expansion of $\sqrt d$.)
Power series
Maybe we should generalise the problem, from computing $\sqrt2$ to eg. $\sqrt{1+x}$, where $x$ is small enough so that 1 is a good initial guess. What's a good first order correction to the initial guess? If $\sqrt{1+x} \approx 1 + cx$ then we want$$1 + x \approx (1+cx)^2 = 1 + 2cx + c^2x^2,$$so when $x$ is small our best choice is $c = \frac12$.
What's the second order correction? If $\sqrt{1+x} \approx 1 + \frac x2 + cx^2$, then we want$$1 + x \approx \left(1 + \frac x2 + cx^2\right)^2 = 1 + x + \left(2c + \frac14\right)x^2 + \cdots,$$so we should choose $c = -\frac18$.
Of course, we could continue this procedure for higher order corrections, and obtain a power series expansion$$\sqrt{1+x} = a_0 + a_1x + a_2x^2 + \cdots$$by equating coefficients on both sides of$$1 + x = (a_0 + a_1x + a_2x^2 + \cdots)^2$$and solving for the $a_n$ recursively.
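The coefficient matching is mechanical enough to sketch in code; this is a rough illustration of mine (not from the post), solving for each $a_n$ in turn with exact fractions:

```python
from fractions import Fraction

def sqrt_series_coeffs(n_terms: int):
    """Coefficients a_n of sqrt(1+x) = sum a_n x^n, found by equating the
    coefficient of x^n on both sides of (sum a_n x^n)^2 = 1 + x."""
    a = [Fraction(1)]  # a_0 = 1
    for n in range(1, n_terms):
        rhs = Fraction(1) if n == 1 else Fraction(0)  # coefficient of x^n in 1+x
        inner = sum(a[k] * a[n - k] for k in range(1, n))
        a.append((rhs - inner) / 2)  # solve 2*a_0*a_n + inner = rhs
    return a

print(*sqrt_series_coeffs(5))  # 1 1/2 -1/8 1/16 -5/128
```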
Fortunately, we don't actually have to do that calculation ourselves, because Newton's binomial series does the hard work for us: for all $|x| < 1$ and real $\alpha$, we have$$(1+x)^\alpha = \sum_{n=0}^\infty \binom\alpha n x^n, \qquad \text{where } \binom\alpha n = \frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!}.$$
Exercise: Derive the above series by computing the $n$-th derivative of $(1+x)^\alpha$ at $x = 0$, and using Taylor's theorem.
Substituting $\alpha = \frac12$, we get$$\sqrt{1+x} = 1 + \frac x2 - \frac{x^2}8 + \frac{x^3}{16} - \frac{5x^4}{128} + \cdots.$$It is tempting at this point to just substitute $x = 1$ to get $\sqrt2$, but that's a bad idea: the series has radius of convergence 1, so even though the series does converge at $x = 1$ (because it's alternating), it does so very slowly:
Exercise: Show that the $n$-th coefficient of the above power series, namely$$\binom{1/2}n = \frac{(-1)^{n-1}}{4^n(2n-1)}\binom{2n}n,$$satisfies$$\left|\binom{1/2}n\right| \sim \frac1{2\sqrt\pi\,n^{3/2}}.$$(Hence we need roughly $10^{2d/3}$ terms to approximate $\sqrt2$ to $d$ decimal places by using the series with $x = 1$.)
A much better alternative is to substitute $x = -\frac12$ to get$$\sqrt2 = 2\sqrt{1-\tfrac12},$$so that the series converges geometrically, although it still takes roughly $3.3d$ terms to get $d$ decimal places of $\sqrt2$, and I don't think I'd be too happy to compute the decimal expansion of terms like $\binom{1/2}n\left(-\frac12\right)^n$.
Can we do better? Instead of using the series to calculate $\sqrt2$ directly, we would really want to calculate the square root of a number closer to 1, so that the series converges faster. Here's where the rational approximations of $\sqrt2$ that we found earlier can help us: for instance, since $\sqrt2 \approx \frac32$, we could instead calculate$$\sqrt2 = \frac32\sqrt{\frac89}$$by substituting $x = -\frac19$ into the series, and then multiply the sum by $\frac32$. Or we could calculate$$\sqrt2 = \frac43\sqrt{\frac98}$$by substituting $x = \frac18$ into the series, and then multiply the sum by $\frac43$. In either case, we now only need roughly $d$ terms to get $d$ decimal places of $\sqrt2$, which is still unpleasant, but at least it's 3 times less unpleasant than before.
So now we go through each rational approximation of $\sqrt2$ from before to write down similar identities, and see if any catches our eye. Eventually we find a miracle:$$2 = \frac{100}{49}\cdot\frac{49}{50} = \left(\frac{10}7\right)^2\left(1 - \frac1{50}\right),$$so we should substitute $x = -\frac1{50}$ into the series. This is very good for us, since it is very easy to divide by powers of 50.
Let's actually do this. We'll rewrite the series a little, substituting $x = -2t$ to get$$\sqrt{1-2t} = 1 - t - \frac{t^2}2 - \frac{t^3}2 - \frac{5t^4}8 - \cdots.$$We now want to take $t = \frac1{100}$. It's easy to get each term in the series from the previous term by multiplying by a small fraction and shifting two decimal places, so we compute$$\sqrt{0.98} = 1 - 0.01 - 0.00005 - 0.0000005 - 0.00000000625 - \cdots = 0.989949493661166\ldots,$$and hence$$\sqrt2 = \frac{10}7\sqrt{0.98} = \frac{9.89949493661166\ldots}7 = 1.414213562373095\ldots$$That's 15 digits of $\sqrt2$ while barely breaking a sweat. Wow!
Digits Until I Got Bored: 15++
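If you'd like to check the arithmetic without redoing the hand computation, here's a sketch of my own (not from the post) that sums the binomial series at $x = -\frac1{50}$ with exact fractions, using the identity $2 = \left(\frac{10}7\right)^2\left(1-\frac1{50}\right)$:

```python
from fractions import Fraction

def sqrt2_via_series(n_terms: int) -> Fraction:
    """sqrt(2) = (10/7) * sqrt(1 - 1/50): sum the binomial series for
    sqrt(1 + x) at x = -1/50, then multiply by 10/7.  Each term comes
    from the previous one by multiplying by (1/2 - n)/(n + 1) * x."""
    x = Fraction(-1, 50)
    term = Fraction(1)
    total = Fraction(0)
    for n in range(n_terms):
        total += term
        term *= (Fraction(1, 2) - n) / (n + 1) * x
    return Fraction(10, 7) * total

print(f"{float(sqrt2_via_series(10)):.15f}")  # 1.414213562373095
```

Ten terms already pin down well over 15 digits, since each term is roughly two orders of magnitude smaller than the last.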
Exercise: If you're thinking to yourself, "hmm, I bet Euler could do something like that," you'd be correct! In fact, in his 1755 textbook on differential calculus, he did it even better, and avoids the division by 7 that we had to do at the end of our calculation. By using the binomial series with $\alpha = -\frac12$ and the identity$$2 = \left(\frac75\right)^2\left(1 - \frac1{50}\right)^{-1},$$compute $\sqrt2$ to 12 (or 15, or 20) decimal places.
Exercise: Instead of using the binomial series, we could also custom-design our own power series expansion. For example, using the approximation $\sqrt2 \approx \frac{140}{99}$, we can write$$\sqrt2 = \frac{140}{99}\sqrt{\frac{9801}{9800}} = \frac{140}{99}\sqrt{\frac1{1-\frac1{9801}}}.$$Find the first few terms of the power series whose square is$$\frac1{1-x}.$$By substituting the appropriate value for $x$, compute $\sqrt2$ to 12 (or 15, or 20) decimal places. (If you don't know the nice trick for dividing by 99, you'll figure it out by the end of this exercise. Hint: group the digits into pairs.)
More fun things to try
- Explain the pattern in the decimal expansion of$$\frac1{9801} = 0.000102030405\ldots$$Hence compute $\sqrt2$ to 12 (or 15, or 20) decimal places.
- (MIT Integration Bee 2023 Finals) Evaluate(Time limit: 4 mins)
- Generalise the methods here to computing arbitrary square roots $\sqrt n$.
- In particular, there doesn't seem to be a great power series for $\sqrt3$, in the same way that we had miracles for $\sqrt2$ (the substitutions $x = -\frac1{50}$ and $x = -\frac1{9801}$). This comes from the fact that the expressions $10^{2n} - 2q^2$ and $p^2 - 2q^2$ can take surprisingly small values for positive integers. What is the smallest possible value of $|10^{2n} - 2q^2|$? The second smallest?
- Of course, "Digits Until I Got Bored" isn't a very scientific metric. Come up with a more objective measurement for the efficiency of the different methods we've seen (ideas: number of 1-digit operations required for $d$ decimal places, or maybe just the number of multiplications).
- (hard) Generalise to a different activity that you enjoy: come up with a metric that quantifies the amount of fun you'll have, and optimise for that metric. Was that a good idea?
- Suppose you're a handheld calculator (ie. suppose you're a programmer at Hewlett-Packard in the 1970s). What method would you implement for the square root function? Compare your proposal with CORDIC, the actual algorithm used for the first handheld calculators; why did HP use CORDIC instead of your proposal?
- Suppose you're feeling very inspired by this blog post, and you want to start a project of computing $\sqrt2$ by hand to 50 or 100 decimal places. Which method would you use? How would you check for mistakes in your intermediate calculations?