Continued Fractions and the Euclidean Algorithm
William F. Hammond
Lecture notes prepared for MATH 326, Spring 1997
Department of Mathematics and Statistics
University at Albany
Continued fractions offer a means of concrete representation for
arbitrary real numbers. The continued fraction expansion of a real
number is an alternative to the representation of such a number as
a (possibly infinite) decimal.

There are several reasons for including this topic in a course on
Classical Algebra.
The continued fraction expansion of a real number
Every real number $x$ is represented by a point on the real line
and, as such, falls between two integers. For example, if $n$ is
an integer and
$$ n \leq x < n + 1 \;, $$
$x$ falls between $n$ and $n+1$, and there is one and only one
such integer $n$ for any given real $x$. In the case where $x$
itself is an integer, one has $n = x$. The integer $n$ is
sometimes called the floor of $x$, and one often introduces
a notation for the floor of $x$ such as
$$ n = [x] \;. $$
Examples: $[3.5] = 3$, $[\pi] = 3$, $[-2] = -2$, $[-2.5] = -3$.
For any real $x$ with $n = [x]$, the number
$u = x - n$ falls in the unit interval
$I$ consisting of all real numbers $u$ for which $0 \leq u < 1$.
Thus, for given real $x$ there is a unique decomposition
$$ x = n + u $$
where $n$ is an integer and $u$ is in the unit interval.
Moreover, $u = 0$ if and only if $x$ is an integer. This
decomposition is sometimes called the mod one decomposition
of a real number. It is the first step in the process
of expanding $x$ as a continued fraction.
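As a small illustrative sketch (not part of the original notes), the mod one decomposition amounts to taking the floor and the fractional part:

```python
import math

x = 3.75
n = math.floor(x)  # the integer part: n = [x], the floor of x
u = x - n          # the fractional part: u lies in [0, 1)
print(n, u)        # 3 0.75
```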
The process of finding the continued fraction expansion of a real
number is a recursive process that proceeds one step at a time.
Given $x$, one begins with the mod one decomposition
$$ x = n_1 + u_1 \;, $$
where $n_1$ is an integer and $0 \leq u_1 < 1$.
If $u_1 = 0$, which happens if and only if $x$ is an integer,
the recursive process terminates with this first step. The idea
is to obtain a sequence of integers that give a precise determination
of $x$.
If $u_1 > 0$, then the reciprocal $1/u_1$ of $u_1$ satisfies
$1/u_1 > 1$ since $u_1$ is in $I$ and, therefore, $u_1 < 1$.
In this case the second step in the recursive determination of the
continued fraction expansion of $x$ is to apply the mod one
decomposition to $1/u_1$. One writes
$$ 1/u_1 = n_2 + u_2 \;, $$
where $n_2$ is an integer and $0 \leq u_2 < 1$. Combining the
equations that represent the first two steps, one may write
$$ x = n_1 + \cfrac{1}{n_2 + u_2} \;. $$
Either $u_2 = 0$, in which case the process ends with the
expansion
$$ x = n_1 + \cfrac{1}{n_2} \;, $$
or $u_2 > 0$. In the latter case one does to $u_2$ what had just
been done to $u_1$ above under the assumption $u_1 > 0$.
One writes
$$ 1/u_2 = n_3 + u_3 \;, $$
where $n_3$ is an integer and $0 \leq u_3 < 1$. Then combining
the equations that represent the first three steps, one may write
$$ x = n_1 + \cfrac{1}{n_2 + \cfrac{1}{n_3 + u_3}} \;. $$
After $k$ steps, if the process has gone that far, one has integers
$n_1, n_2, \dots, n_k$ and real numbers
$u_1, u_2, \dots, u_k$ that are members of the unit interval
$I$ with $u_1, u_2, \dots, u_{k-1}$ all positive.
One may write
$$ x = n_1 + \cfrac{1}{n_2 + \cfrac{1}{n_3 + \cfrac{1}{\ddots + \cfrac{1}{n_k + u_k}}}} \;. $$
Alternatively, one may write
$$ x = [\, n_1, n_2, n_3, \dots, n_k + u_k \,] \;. $$
If $u_k = 0$, the process ends after $k$ steps. Otherwise,
the process continues at least one more step with
$$ 1/u_k = n_{k+1} + u_{k+1} \;. $$
In this way one associates with any real number $x$ a sequence,
which could be either finite or infinite, $n_1, n_2, \dots$
of integers. This sequence is called the continued fraction
expansion of $x$.
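The recursive process just described can be sketched in Python (a sketch, not part of the original notes); `Fraction` keeps the arithmetic exact, and the cap on the number of terms guards against the infinite expansions of irrational inputs:

```python
from fractions import Fraction

def continued_fraction(x, max_terms=20):
    """Expand x by repeated mod one decomposition:
    x = n + u with 0 <= u < 1; if u > 0, continue with 1/u."""
    terms = []
    x = Fraction(x)
    for _ in range(max_terms):
        n = x.numerator // x.denominator  # floor of x
        terms.append(n)
        u = x - n
        if u == 0:
            break
        x = 1 / u
    return terms

print(continued_fraction(Fraction(15, 11)))  # [1, 2, 1, 3]
```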
Convention. When $[\, n_1, n_2, \dots \,]$ is called a
continued fraction, it is understood that all of the numbers
$n_j$ are integers and that $n_j \geq 1$ for $j \geq 2$.
$$\begin{aligned}
\frac{15}{11} &= 1 + \frac{4}{11} \\
&= 1 + \cfrac{1}{\;\frac{11}{4}\;} \\
&= 1 + \cfrac{1}{2 + \cfrac{3}{4}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{\;\frac{4}{3}\;}} \\
&= 1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{3}}} \\
&= [\, 1, 2, 1, 3 \,]
\end{aligned}$$
Since $(\sqrt{10} - 3)(\sqrt{10} + 3) = 1$, the reciprocal of $\sqrt{10} - 3$ is $\sqrt{10} + 3 = 6 + (\sqrt{10} - 3)$, and so
$$\begin{aligned}
\sqrt{10} &= 3 + (\sqrt{10} - 3) \\
&= 3 + \cfrac{1}{\sqrt{10} + 3} \\
&= 3 + \cfrac{1}{6 + (\sqrt{10} - 3)} \\
&= 3 + \cfrac{1}{6 + \cfrac{1}{\sqrt{10} + 3}} \\
&= 3 + \cfrac{1}{6 + \cfrac{1}{6 + (\sqrt{10} - 3)}} \\
&= [\, 3, 6, 6, 6, \dots \,]
\end{aligned}$$
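A quick floating-point sketch (not part of the original notes) confirms the pattern; rounding error grows with each reciprocal, so only the first several terms should be trusted:

```python
import math

# Expand sqrt(10) by the same recursive process, numerically.
x = math.sqrt(10)
terms = []
for _ in range(6):
    n = math.floor(x)
    terms.append(n)
    u = x - n
    if u == 0:
        break
    x = 1 / u

print(terms)  # [3, 6, 6, 6, 6, 6]
```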
$$\begin{aligned}
[\, 2, 3, 5, 2 \,] &= 2 + \cfrac{1}{[\, 3, 5, 2 \,]} \\
&= 2 + \cfrac{1}{3 + \cfrac{1}{[\, 5, 2 \,]}} \\
&= 2 + \cfrac{1}{3 + \cfrac{1}{5 + \cfrac{1}{2}}} \\
&= 2 + \cfrac{1}{3 + \cfrac{1}{\;\frac{11}{2}\;}} \\
&= 2 + \cfrac{1}{3 + \cfrac{2}{11}} \\
&= 2 + \cfrac{1}{\;\frac{35}{11}\;} \\
&= 2 + \frac{11}{35} \\
&= \frac{81}{35}
\end{aligned}$$
Let
$$ x = 1 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{1}{2 + \cdots}}}}} \;. $$
In this case one finds that
$$ x = 1 + \frac{1}{y} \;, $$
where
$$ y = 2 + \cfrac{1}{3 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{1}{2 + \cdots}}}} \;. $$
Further reflection shows that the continued fraction structure for
$y$ is self-similar:
$$ y = 2 + \cfrac{1}{3 + \cfrac{1}{y}} \;. $$
This simplifies to
$$ y = \frac{7y + 2}{3y + 1} $$
and leads to the quadratic equation
$$ 3y^2 - 6y - 2 = 0 $$
with discriminant $60$. Since $y > 2$, one of the two
roots of the quadratic equation cannot be $y$ and, therefore,
$$ y = \frac{3 + \sqrt{15}}{3} \;. $$
Finally,
$$ x = \frac{\sqrt{15} - 1}{2} \;. $$
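The closed form can be checked numerically (a sketch, not part of the original notes) by truncating the repeating fraction after many terms and folding it up from the right:

```python
import math

terms = [1] + [2, 3] * 20   # x = [1, 2, 3, 2, 3, ...], truncated
value = terms[-1]
for t in reversed(terms[:-1]):
    value = t + 1 / value

print(value)                    # approximately 1.43649167...
print((math.sqrt(15) - 1) / 2)  # the closed form found above
```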
The idea of the calculation above leads to the conclusion that any
continued fraction $[\, n_1, n_2, \dots \,]$ that eventually repeats
is the solution of a quadratic equation with positive discriminant
and integer coefficients. The converse of this statement is also
true, but a proof requires further consideration.
The case of a rational number
The process of finding the continued fraction expansion of a rational
number is essentially identical to the process of applying the
Euclidean algorithm to the pair of integers given by its numerator and
denominator.
Let $x = a/b$, $b > 0$, be a representation of a rational number
$x$ as a quotient of integers $a$ and $b$. The mod one decomposition
$$ \frac{a}{b} = n_1 + u_1 \;, \quad u_1 = \frac{a - n_1 b}{b} $$
shows that $u_1 = r_1/b$, where $r_1$ is the remainder for
division of $a$ by $b$. The case where $u_1 = 0$ is the case
where $x$ is an integer. Otherwise $u_1 > 0$, and the mod one
decomposition of $1/u_1$ gives
$$ \frac{b}{r_1} = n_2 + u_2 \;, \quad u_2 = \frac{b - n_2 r_1}{r_1} \;. $$
This shows that $u_2 = r_2/r_1$, where $r_2$ is the remainder for
division of $b$ by $r_1$. Thus, the successive quotients in Euclid's
algorithm are the integers $n_1, n_2, \dots$ occurring in the
continued fraction. Euclid's algorithm terminates after a finite
number of steps with the appearance of a zero remainder. Hence, the
continued fraction expansion of every rational number is finite.
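This parallel can be made concrete in a few lines (a sketch, not part of the original notes): collecting the successive quotients of Euclid's algorithm reproduces the partial quotients of the continued fraction.

```python
def euclid_quotients(a, b):
    """Successive quotients of Euclid's algorithm on (a, b), b > 0.
    These are the integers n1, n2, ... of the expansion of a/b."""
    quotients = []
    while b != 0:
        q, r = divmod(a, b)  # a = q*b + r with 0 <= r < b
        quotients.append(q)
        a, b = b, r
    return quotients

print(euclid_quotients(15, 11))  # [1, 2, 1, 3]
```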
Theorem. The continued fraction expansion of a real number is
finite if and only if the real number is rational.
Proof. It has just been shown that if $x$ is rational, then
the continued fraction expansion of $x$ is finite because its calculation
is given by application of the Euclidean algorithm to the numerator
and denominator of $x$. The converse statement is the statement
that every finite continued fraction represents a rational number.
That statement will be demonstrated in the following section.
The symbol $[\, t_1, t_2, \dots, t_r \,]$

For arbitrary real numbers $t_1, t_2, \dots, t_r$
with each $t_j \geq 1$ for $j \geq 2$,
the symbol $[\, t_1, t_2, \dots, t_r \,]$ is defined recursively
by:
$$ [\, t_1 \,] = t_1 $$
$$ [\, t_1, t_2, \dots, t_r \,] = t_1 + \cfrac{1}{[\, t_2, \dots, t_r \,]} \;. \tag{trec} $$
In order for this definition to make sense one needs to know that
the denominator in the right-hand side of (trec) is non-zero.
The condition $t_j \geq 1$ for $j \geq 2$ guarantees, in
fact, that $[\, t_2, \dots, t_r \,] > 0$, as one may prove using
induction.
It is an easy consequence of mathematical induction that the symbol
$[\, t_1, t_2, \dots, t_r \,]$ is a rational number if each $t_j$ is
rational. In particular, each finite continued fraction is a
rational number. (Note that the symbol $[\, t_1, t_2, \dots, t_r \,]$
is to be called a continued fraction, according to the convention of
the first section, only when each $t_j$ is an integer.)
Observe that the recursive nature of the symbol $[\, t_1, \dots, t_r \,]$
suggests that the symbol should be computed in a particular case working
from right to left. Consider again, for example, the computation above
showing that $[\, 2, 3, 5, 2 \,] = 81/35$. Working from right to left one
has:
$$\begin{aligned}
[\, 2 \,] &= 2 \\
[\, 5, 2 \,] &= 5 + \frac{1}{[\, 2 \,]} = 5 + \frac{1}{2} = \frac{11}{2} \\
[\, 3, 5, 2 \,] &= 3 + \frac{1}{[\, 5, 2 \,]} = 3 + \frac{2}{11} = \frac{35}{11} \\
[\, 2, 3, 5, 2 \,] &= 2 + \frac{1}{[\, 3, 5, 2 \,]} = 2 + \frac{11}{35} = \frac{81}{35}
\end{aligned}$$
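The right-to-left computation is exactly a fold over the reversed list of terms; the following sketch (not part of the original notes) uses exact rational arithmetic:

```python
from fractions import Fraction

def cf_value(terms):
    """Evaluate [t1, ..., tr] by the back end recursion (trec),
    working from right to left."""
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    return value

print(cf_value([2, 3, 5, 2]))  # 81/35
```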
There is, however, another approach to computing
$[\, t_1, t_2, \dots, t_r \,]$.
Let, in fact, $t_1, t_2, \dots$ be any (finite or infinite)
sequence of real numbers. One uses the double recursion
$$ p_j = t_j p_{j-1} + p_{j-2} \;, \quad j \geq 1 \;, \quad p_0 = 1 \;, \; p_{-1} = 0 \tag{pdef} $$
to define the sequence $\{p_j\}$, $j \geq -1$. The double
recursion, differently initialized,
$$ q_j = t_j q_{j-1} + q_{j-2} \;, \quad j \geq 1 \;, \quad q_0 = 0 \;, \; q_{-1} = 1 \tag{qdef} $$
defines the sequence $\{q_j\}$, $j \geq -1$.
Note that $p_1 = t_1$, $p_2 = t_1 t_2 + 1$,
and $q_1 = 1$, $q_2 = t_2$, $q_3 = t_2 t_3 + 1$.
One now forms the matrix
$$ M_j = \begin{pmatrix} p_j & q_j \\ p_{j-1} & q_{j-1} \end{pmatrix} \quad \text{for} \quad j \geq 0 \;. \tag{Mrdef} $$
Thus, for example,
$$ M_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \;, \quad \text{and} \quad
M_1 = \begin{pmatrix} t_1 & 1 \\ 1 & 0 \end{pmatrix} \;. $$
It is easy to see that the matrices $M_j$ satisfy the double recursion
$$ M_j = \begin{pmatrix} t_j & 1 \\ 1 & 0 \end{pmatrix} M_{j-1} \;, \quad j \geq 1 \;, \tag{Mrrec} $$
as a consequence of the double recursion formulas for the $p_j$ and $q_j$.
Hence, a simple argument by mathematical induction shows that
$$ M_r = \begin{pmatrix} t_r & 1 \\ 1 & 0 \end{pmatrix} \cdots
\begin{pmatrix} t_2 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} t_1 & 1 \\ 1 & 0 \end{pmatrix} \;, \quad r \geq 1 \;. \tag{Mrcomp} $$
This is summarized by:

Proposition. For any sequence $\{t_j\}$, $j \geq 1$, of
real numbers, if $\{p_j\}$ and $\{q_j\}$ are the sequences
defined by the double recursions (pdef) and (qdef), then one
has the matrix identity
$$ \begin{pmatrix} p_r & q_r \\ p_{r-1} & q_{r-1} \end{pmatrix}
= \begin{pmatrix} t_r & 1 \\ 1 & 0 \end{pmatrix} \cdots
\begin{pmatrix} t_2 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} t_1 & 1 \\ 1 & 0 \end{pmatrix} \tag{prqrrow} $$
for each integer $r \geq 1$.
Corollary (pqiden). One has the identity
$$ p_r q_{r-1} - q_r p_{r-1} = (-1)^r $$
for each integer $r \geq 1$.

Proof. The number $p_r q_{r-1} - q_r p_{r-1}$ is the
determinant of the matrix $M_r$. From the formula (Mrcomp)
the matrix $M_r$ is the product of $r$ matrix factors, each of
which has determinant $-1$. Since the determinant of the product
of matrices is the product of the determinants of the factors, it is
clear that $\det(M_r) = (-1)^r$.
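The front end double recursions, and the determinant identity just proved, can both be checked with a short sketch (not part of the original notes; the function name is ad hoc):

```python
def convergents(terms):
    """Run the double recursions (pdef) and (qdef):
    p_j = t_j*p_{j-1} + p_{j-2} with p_0 = 1, p_{-1} = 0,
    q_j = t_j*q_{j-1} + q_{j-2} with q_0 = 0, q_{-1} = 1.
    Returns the pairs (p_j, q_j) for j = 1, ..., r."""
    p_prev, p = 0, 1  # p_{-1}, p_0
    q_prev, q = 1, 0  # q_{-1}, q_0
    pairs = []
    for t in terms:
        p_prev, p = p, t * p + p_prev
        q_prev, q = q, t * q + q_prev
        pairs.append((p, q))
    return pairs

pairs = convergents([2, 3, 5, 2])
print(pairs)  # [(2, 1), (7, 3), (37, 16), (81, 35)]

# Check p_r*q_{r-1} - q_r*p_{r-1} = (-1)^r for r = 4:
(p3, q3), (p4, q4) = pairs[-2], pairs[-1]
print(p4 * q3 - q4 * p3)  # 1
```

Note that the last pair, $(81, 35)$, agrees with the value $81/35$ computed for $[\, 2, 3, 5, 2 \,]$ earlier.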
Corollary (viden). One has the vector identity
$$ \begin{pmatrix} p_r \\ q_r \end{pmatrix}
= \begin{pmatrix} t_1 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} t_2 & 1 \\ 1 & 0 \end{pmatrix} \cdots
\begin{pmatrix} t_r & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \end{pmatrix} \tag{prqrvec} $$
for each integer $r \geq 1$.

Proof. First recall (i) that the product of a matrix and a (column)
vector is defined by the relation
$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} \;, $$
(ii) that the transpose of a matrix is the matrix whose rows
are the columns of the given matrix, and (iii) that the transpose
operation reverses matrix multiplication. One transposes both sides of
the relation (prqrrow) to obtain:
$$ \begin{pmatrix} p_r & p_{r-1} \\ q_r & q_{r-1} \end{pmatrix}
= \begin{pmatrix} t_1 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} t_2 & 1 \\ 1 & 0 \end{pmatrix} \cdots
\begin{pmatrix} t_r & 1 \\ 1 & 0 \end{pmatrix} \;. \tag{prqrcol} $$
To this relation one applies the principle that the first column of
any $2 \times 2$ matrix is the product of that matrix with the
column
$$ \begin{pmatrix} 1 \\ 0 \end{pmatrix} $$
in order to obtain the column identity (prqrvec).
Theorem. For any sequence $\{t_j\}$, $j \geq 1$, of
real numbers, if $\{p_j\}$ and $\{q_j\}$ are the sequences
defined by the double recursions (pdef) and (qdef), and
if $t_j \geq 1$ for $j \geq 2$, then the value of the symbol
$[\, t_1, \dots, t_r \,]$ is given by the formula
$$ [\, t_1, t_2, \dots, t_r \,] = \frac{p_r}{q_r} \tag{tcomp} $$
for $r \geq 1$.
Proof. What is slightly strange about this important
result is that while the $p_r$ and the $q_r$ are
defined by the front end recursions, albeit double recursions,
(pdef) and (qdef), the symbol $[\, t_1, \dots, t_r \,]$
is defined by the back end recursion (trec). The proof
begins with the comment that the right-hand side of (tcomp)
does not make sense unless one can be sure that the denominator
$q_r \neq 0$. One can show easily by induction on $r$ that
$q_r \geq 1$ for $r \geq 1$ under the hypothesis $t_j \geq 1$
for $j \geq 2$.

The proof proceeds by induction on $r$. If $r = 1$, the
assertion of the theorem is simply the statement $t_1 = p_1/q_1$,
and, as noted above, $p_1 = t_1$ and $q_1 = 1$. Assume now
that $r \geq 2$. By induction we may assume the correctness of
the statement (tcomp) for symbols of length $r-1$ and,
therefore, for the symbol $[\, t_2, \dots, t_r \,]$. That case of the
statement says that $[\, t_2, \dots, t_r \,]$ must be equal to $a/c$,
where by corollary (viden)
$$ \begin{pmatrix} a \\ c \end{pmatrix}
= \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \end{pmatrix} $$
with
$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} t_2 & 1 \\ 1 & 0 \end{pmatrix} \cdots $$