\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx}
\title{About the Gamma Function \\
\large Notes for Honors Calculus II, \\
Originally Prepared in Spring 1995}
\newcommand{\intp}{\int_0^{\infty}}
\newcommand{\intao}[1]{\int_{-\infty}^{#1}}
\newcommand{\limk}{\lim_{k \rightarrow \infty}}
\newcommand{\limn}{\lim_{n \rightarrow \infty}}
\newcommand{\Limn}{\lim_{n \rightarrow \infty}}
\newcommand{\limt}{\lim_{t \rightarrow 0}}
\newcommand{\sumkn}{\sum_{k=1}^n}
\newcommand{\stdsum}[1]{\sum_{#1=1}^{\infty}}
\newcommand{\stdprod}[1]{\prod_{#1=1}^{\infty}}
\newcommand{\npfrac}[2]{\left(1 + \frac{#1}{#2}\right)}
\begin{document}
\maketitle
\section{Basic Facts about the Gamma Function}
The Gamma function is defined by the improper integral
\[ \Gamma(x) = \intp t^x e^{-t} \frac{dt}{t} \ . \]
The integral is absolutely convergent for $ x \geq 1 $ since
\[ t^{x-1} e^{-t} \leq e^{-t/2} \ , \quad t \gg 1 \ \]
and $\intp e^{-t/2} dt$ is convergent. The preceding
inequality is valid, in fact, for all $x$. But for $ x < 1 $
the integrand becomes infinitely large as $t$ approaches $0$ through
positive values. Nonetheless, the limit
\[ \lim_{r \rightarrow 0+} \int_r^1 t^{x-1} e^{-t} dt \]
exists for $ x > 0 $ since
\[ t^{x-1} e^{-t} \leq t^{x-1} \]
for $t > 0$, and, therefore, the limiting value of the preceding integral
is no larger than that of
\[ \lim_{r \rightarrow 0+} \int_r^1 t^{x-1} dt = \frac{1}{x} \ . \]
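In detail, the integral may be evaluated explicitly before passing
to the limit:
\[ \int_r^1 t^{x-1} dt = \left[\frac{t^x}{x}\right]_{t=r}^{t=1}
   = \frac{1 - r^x}{x} \ , \]
and, since $x > 0$, the term $r^x$ approaches $0$ as $r \rightarrow 0+$.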
Hence, $\Gamma(x)$ is defined by the
first formula above for all values $ x > 0 $.
If one integrates by parts the integral
\[ \Gamma(x + 1) = \intp t^x e^{-t} dt \ , \]
writing
\[ \intp u \, dv = u(\infty)v(\infty) - u(0)v(0) - \intp v \, du \ ,\]
with $dv = e^{-t}dt$ and $u = t^{x}$, one obtains the \emph{functional
equation}
\[ \Gamma(x+1) = x \Gamma(x) \ , \ \ x > 0 \ . \]
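Spelled out: with $u = t^x$ and $v = -e^{-t}$ the boundary term
$u(\infty)v(\infty) - u(0)v(0)$ vanishes, since $t^x e^{-t}$ approaches
$0$ both as $t \rightarrow 0+$ (because $x > 0$) and as
$t \rightarrow \infty$, and so
\[ \Gamma(x+1) = - \intp v \, du = x \intp t^{x-1} e^{-t} dt
   = x \Gamma(x) \ . \]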
Obviously, $\Gamma(1) = \intp e^{-t} dt = 1 $, and, therefore,
$\Gamma(2) = 1 \cdot \Gamma(1) = 1$, \
$\Gamma(3) = 2 \cdot \Gamma(2) = 2!$, \
$\Gamma(4) = 3 \cdot \Gamma(3) = 3!$, \ldots, and, finally,
\[ \Gamma(n+1) = n! \]
for each integer $n > 0$.
Thus, the gamma function provides a way of giving a meaning to the
``factorial'' of any positive real number.
Another reason for interest in the gamma function is its relation
to integrals that arise in the study of probability. The graph of the
function $\varphi$ defined by
\[ \varphi(x) = e^{-x^2} \]
is the famous ``bell-shaped curve'' of probability theory. It can be
shown that the anti-derivatives of $\varphi$ are not expressible in
terms of elementary functions. On the other hand,
\[ \Phi(x) = \intao{x} \varphi(t) dt \]
is, by the fundamental theorem of calculus, an anti-derivative of
$\varphi$, and information about its values is useful.
One finds that
\[ \Phi(\infty) = \intao{\infty} e^{-t^2} dt = \Gamma(1/2) \]
by observing that
\[ \intao{\infty} e^{-t^2} dt = 2 \cdot \intp e^{-t^2} dt \ , \]
and that upon making the substitution $t=u^{1/2}$ in the latter
integral, one obtains $\Gamma(1/2)$.
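In detail: with $t = u^{1/2}$ one has $dt = \frac{1}{2} u^{-1/2} du$,
and, therefore,
\[ 2 \intp e^{-t^2} dt = 2 \intp e^{-u} \cdot \frac{1}{2} u^{-1/2} du
   = \intp u^{1/2} e^{-u} \frac{du}{u} = \Gamma(1/2) \ . \]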
To have some idea of the size of $\Gamma(1/2)$, it will be useful
to consider the qualitative nature of the graph of $\Gamma(x)$.
For that one wants to know the derivative of $\Gamma$.
By definition $\Gamma(x)$ is an integral (a definite integral with
respect to the dummy variable \ $t$) of a function of $x$ and
$t$. Intuition suggests that one ought to be able to find the
derivative of $\Gamma(x)$ by taking the integral (with respect to $t$)
of the derivative with respect to $x$ of the integrand.
Unfortunately, there are examples where this fails to be correct; on
the other hand, it is correct in most situations where one is inclined
to do it. The methods required to justify ``differentiation under the
integral sign'' will be regarded as slightly beyond the scope of this
course. A similar stance will be adopted also for differentiation of
the sum of a convergent infinite series.
Since
\[ \frac{d}{dx} t^x = t^x(\log t) \ , \]
one finds
\[ \frac{d}{dx} \Gamma(x) = \intp t^x (\log t) e^{-t} \frac{dt}{t} \ , \]
and, differentiating again,
\[ \frac{d^2}{dx^2} \Gamma(x) =
\intp t^x (\log t)^2 e^{-t} \frac{dt}{t} \ . \]
One observes that in the integrals for both $\Gamma$ and the second
derivative $\Gamma''$ the integrand is always positive. Consequently,
one has $\Gamma(x) > 0$ and $\Gamma''(x) > 0$ for all $x > 0$. This
means that the derivative $\Gamma'$ of $\Gamma$ is a strictly increasing
function; one would like to know where it becomes positive.
If one differentiates the functional equation
\[ \Gamma(x+1) = x \Gamma(x) \ , \ \ x > 0 \ , \]
one finds
\[ \psi(x+1) = \frac{1}{x} + \psi(x) \ , \ \ x > 0 \ , \]
where
\[ \psi(x) = \frac{d}{dx} \log\Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)} \ , \]
and, consequently,
\[ \psi(n+1) = \psi(1) + \sum_{k=1}^{n} \frac{1}{k} \ . \]
Since the harmonic series diverges, its partial sum in the foregoing
line approaches $\infty$ as $n \rightarrow \infty$. Inasmuch as
$\Gamma'(x) = \psi(x)\Gamma(x)$, it is clear that $\Gamma'$ approaches
$\infty$ as $x \rightarrow \infty$ since $\Gamma'$ is steadily increasing
and its integer values $(n-1)!\psi(n)$ approach $\infty$.
Because $2 = \Gamma(3) > 1 = \Gamma(2)$,
it follows that $\Gamma'$ cannot be negative everywhere in the interval
$2 \leq x \leq 3$, and, therefore, since $\Gamma'$ is increasing, $\Gamma'$
must be always positive for $x \geq 3$. As a result, $\Gamma$ must be
increasing for $x \geq 3$, and, since $\Gamma(n + 1) = n!$, one sees
that $\Gamma(x)$ approaches $\infty$ as $x \rightarrow \infty$.
It is also the case that $\Gamma(x)$ approaches $\infty$ as
$x \rightarrow 0+$. To see this one observes that the
integral from
$0$ to $\infty$ defining $\Gamma(x)$ is greater than the integral from
$0$ to $1$ of the same integrand. Since $e^{-t} \geq 1/e$ for
$0 \leq t \leq 1$, one has
\[\Gamma(x)>\int_0^1 (1/e)t^{x-1}dt =
(1/e)\left[\frac{t^x}{x}\right]_{t=0}^{t=1}=\frac{1}{ex}\ .\]
It then follows from the mean value theorem, combined with the fact that
$\Gamma'$ is always increasing, that $\Gamma'(x)$ approaches $-\infty$
as $x \rightarrow 0+$.
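In detail: for $0 < x < 1$ the mean value theorem provides a point
$\xi_x$ with $x < \xi_x < 1$ for which
\[ \Gamma'(\xi_x) = \frac{\Gamma(1) - \Gamma(x)}{1 - x}
   < \frac{1 - \frac{1}{ex}}{1 - x} \ , \]
and the right-hand side approaches $-\infty$ as $x \rightarrow 0+$;
since $\Gamma'$ is increasing and $x < \xi_x$, one has
$\Gamma'(x) \leq \Gamma'(\xi_x)$.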
Hence, there is a unique number $c > 0$ for which $\Gamma'(c) = 0$,
and $\Gamma$ decreases steadily from $\infty$ to the minimum value
$\Gamma(c)$ as $x$ varies from $0$ to $c$ and then increases to
$\infty$ as $x$ varies from $c$ to $\infty$. Since
$\Gamma(1) = 1 = \Gamma(2)$, the number $c$ must lie in the interval
from $1$ to $2$ and the minimum value $\Gamma(c)$ must be less than $1$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{grmplgamma}
\caption{Graph of the Gamma Function}
\end{figure}
Thus, the graph of $\Gamma$ (see Figure~1) is concave upward
and lies entirely in the first quadrant of the plane. It has the
$y$-axis as a vertical asymptote. It falls steadily for $0 < x < c$
to a positive minimum value $\Gamma(c) < 1$. For $x > c$ the graph
rises rapidly.
\section{Product Formulas}
It will be recalled, as one may show using l'H\^opital's Rule,
that
\[ e^{-t} = \Limn \left(1-\frac{t}{n}\right)^n \ . \]
From the original formula for $\Gamma(x)$, using an interchange of limits
that in a more careful exposition would receive further comment, one has
\[ \Gamma(x) = \Limn \Gamma(x,n) \ , \]
where $\Gamma(x,n)$ is defined by
\[\Gamma(x,n)=\int_0^n t^{x-1}\left(1-\frac{t}{n}\right)^n dt\ ,\ n\geq 1\ .\]
The substitution in which \ $t$ \ is replaced by \ $nt$ \ leads to
the formula
\[ \Gamma(x,n) = n^x \int_0^1 t^{x-1} (1 - t)^n dt \ . \]
This integral for $\Gamma(x,n)$ is amenable to integration by parts.
One finds thereby:
\[ \Gamma(x,n)=\frac{1}{x}\left(\frac{n}{n-1}\right)^{x+1}\Gamma(x+1,n-1)\ , \ n \geq 2 \ . \]
For the smallest value of $n$, \ $n = 1$ \ , integration by parts yields:
\[ \Gamma(x,1) = \frac{1}{x(x+1)} \ . \]
Iterating $n-1$ times, one obtains:
\[ \Gamma(x,n) = n^x \frac{n!}{x(x+1)(x+2)\cdots(x+n)} \ , \ n \geq 1 \ . \]
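For example, when $n = 2$ the recursion together with the formula for
$\Gamma(x,1)$ gives
\[ \Gamma(x,2) = \frac{1}{x}\left(\frac{2}{1}\right)^{x+1}\Gamma(x+1,1)
   = \frac{2^{x+1}}{x} \cdot \frac{1}{(x+1)(x+2)}
   = 2^x \, \frac{2!}{x(x+1)(x+2)} \ , \]
in agreement with the general formula.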
Thus, one arrives at the formula
\[ \Gamma(x) = \Limn n^x \frac{n!}{x(x+1)(x+2)\cdots(x+n)} \ . \]
This last formula is not exactly in the form of an infinite product
\[ \stdprod{k} p_k = \Limn \, \prod_{k=1}^n p_k \ . \]
But a simple trick enables one to maneuver it into such an infinite
product. One writes \ $n+1$ \ as a ``collapsing product'':
\[n+1=\frac{n+1}{n}\cdot\frac{n}{n-1}\cdot
\cdots \cdot\frac{3}{2}\cdot\frac{2}{1} \]
or
\[ n+1 = \prod_{k=1}^n \npfrac{1}{k} \ , \]
and, taking the $x$th power, one has
\[ (n+1)^x = \prod_{k=1}^n \npfrac{1}{k}^x \ . \]
Since
\[ \limn \frac{n^x}{(n+1)^x} = 1 \ , \]
one may replace the factor $n^x$ by $(n+1)^x$ in the last expression above
for $\Gamma(x)$ to obtain
\[ \Gamma(x) = \frac{1}{x} \, \limn \prod_{k=1}^n
\frac{\npfrac{1}{k}^x}{\npfrac{x}{k}} \ , \]
or
\[\Gamma(x)=\frac{1}{x} \, \stdprod{k}
\frac{\npfrac{1}{k}^x}{\npfrac{x}{k}}\ .\]
The convergence of this infinite product for $\Gamma(x)$ when $x > 0$
is a consequence, through the various maneuvers performed, of the
convergence of the original improper integral defining $\Gamma(x)$ for
$x > 0$.
It is now possible to represent $\log\Gamma(x)$ as the sum of an infinite
series by taking the logarithm of the infinite product formula. But first
it must be noted that
\[ \frac{(1+t)^r}{1 + rt} > 0 \ \ \mbox{for} \ t > 0 \ , \ \ r > 0 \ . \]
Hence, the logarithm of each term in the preceding infinite product is
defined when $x > 0$.
Taking the logarithm of the infinite product one finds:
\[ \log \Gamma(x) = - \log x + \stdsum{k} u_k(x) \ , \]
where
\[ u_k(x) = x\log\npfrac{1}{k}-\log\npfrac{x}{k} \ . \]
It is, in fact, almost true that this series converges absolutely for
\emph{all} real values of $x$. The only problem with non-positive
values of $x$ lies in the fact that $\log(x)$ is meaningful only for
$x > 0$, \ and, therefore, $\log(1+x/k)$ is meaningful only for
$k > |x|$. For fixed $x$, if one excludes the finite set of terms
$u_k(x)$ for which $k \leq |x|$, \ then the remaining ``tail'' of
the series is meaningful and is absolutely convergent.
To see this one applies the ``ratio
comparison test'' which says that an infinite series converges absolutely
if the ratio of the absolute value of its general term to the general
term of a convergent positive series exists and is finite. For this
one may take as the ``test series'', the series
\[ \stdsum{k} \frac{1}{k^2}\sum: \ . \]
Now as \ $k$ \ approaches \ $\infty$, \ $t = 1/k$ \ approaches $0$,
and so, by l'H\^opital's Rule,
\begin{eqnarray*}
\limk\frac{u_k(x)}{1/k^2} & = & \limt\frac{x\log(1+t)-\log(1+xt)}{t^2} \\
& = & \limt\frac{\frac{x}{1+t}-\frac{x}{1+xt}}{2t} \\
& = & \limt\frac{x[(1+xt)-(1+t)]}{2t(1+t)(1+xt)} \\
& = & \frac{x(x-1)}{2} \ .
\end{eqnarray*}
Hence, the limit of $|u_k(x)/k^{-2}|$ is $|x(x-1)/2|$, and the series
$\sum u_k(x)$ is absolutely convergent for all real $x$. The absolute
convergence of this series foreshadows the possibility of defining
$\Gamma(x)$ for all real values of $x$ other than non-positive integers.
This may be done, for example, by using the functional equation
\[ \Gamma(x+1) = x \Gamma(x) \]
or
\[ \Gamma(x) = \frac{1}{x} \Gamma(x + 1) \]
to define $\Gamma(x)$ for $-1 < x < 0$ and from there
to $-2 < x < -1$, etc.
Taking the derivative of the series for $\log\Gamma(x)$ term-by-term
-- once again a step that would receive justification in a more careful
treatment -- and recalling the previous notation $\psi(x)$ for the
derivative of $\log\Gamma(x)$, one obtains
\begin{eqnarray*}
\psi(x) + \frac{1}{x}
& = & \stdsum{k}\left\{\log\npfrac{1}{k}
-\frac{\frac{1}{k}}{\npfrac{x}{k}}\right\} \\
& = & \Limn\sumkn\left\{\log\frac{k+1}{k}-\frac{1}{x+k}\right\} \\
& = & \Limn\left\{\log(n+1)-\sumkn\frac{1}{x+k}\right\} \\
& = & \Limn\left\{\log(n+1)-\sumkn\frac{1}{k}
+\sumkn\left(\frac{1}{k}-\frac{1}{x+k}\right)\right\} \\
& = & \Limn\left\{\log(n+1)-\sumkn\frac{1}{k}
+x\sumkn\frac{1}{k(x+k)}\right\} \\
& = & -\gamma + x\stdsum{k}\frac{1}{k(x+k)} \ ,
\end{eqnarray*}
where $\gamma$ denotes Euler's constant
\[ \gamma = \Limn\left(\sumkn\frac{1}{k} - \log n\right) \ . \]
When $x = 1$ one has
\[ \psi(1) = -1 - \gamma + \stdsum{k}\frac{1}{k(k+1)} \ , \]
and since
\[ \frac{1}{k(k+1)} = \frac{1}{k} - \frac{1}{k+1} \ , \]
this series collapses and, therefore, is easily seen to sum to $1$.
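Indeed, the $n$th partial sum telescopes:
\[ \sumkn \frac{1}{k(k+1)}
   = \sumkn \left(\frac{1}{k} - \frac{1}{k+1}\right)
   = 1 - \frac{1}{n+1} \ , \]
which approaches $1$ as $n \rightarrow \infty$.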
Hence,
\[ \psi(1) = - \gamma \ , \ \ \psi(2) = \psi(1) + 1/1 = 1 - \gamma \ . \]
Since \ $\Gamma'(x) = \psi(x)\Gamma(x)$, \ one finds:
\[ \Gamma'(1) = - \gamma \ , \]
and
\[ \Gamma'(2) = 1 - \gamma \ . \]
\hrule
These course notes were prepared while consulting standard references
in the subject, which included those that follow.
\begin{thebibliography}{9}
\bibitem{courant} R.~Courant, \textsl{Differential and Integral
Calculus} (2 volumes), English translation by E.~J. McShane,
Interscience Publishers, New York, 1961.
\bibitem{whittakerWatson} E.~T. Whittaker \& G.~N. Watson,
\textsl{A~Course of Modern Analysis}, 4th edition, Cambridge University
Press, 1969.
\bibitem{widder} David Widder, \textsl{Advanced Calculus}, 2nd edition,
Prentice Hall, 1961.
\end{thebibliography}
\end{document}