On this regularly updated page, I will walk with you from the history to the mathematics of the Riemann zeta function, step by step and in simple terms.
From Finite Sums to Infinite Series
Let us start by summing all the numbers from 1 to 10, inclusive. We call this sum

$$S = 1 + 2 + 3 + \cdots + 10.$$

We also represent it as

$$S = \sum_{k=1}^{10} k,$$

where the symbol $\Sigma$, the capital Greek letter sigma, is a shorthand notation for a sum. We can clearly compute this sum, and it evaluates to 55. Since later we will deal with many terms, let us develop a simple technique. Because addition is a commutative operation, meaning $a + b = b + a$, we can rewrite the same sum as:

$$S = 10 + 9 + 8 + \cdots + 1.$$

If we add these two expressions term by term, we can pair the terms according to their positions. Each pair adds up to 11, and we have 10 such pairs, so:

$$2S = 11 \times 10 = 110.$$

Hence, dividing by 2, we obtain $S = 55$. The pattern now becomes clear. If we summed the numbers from 1 up to any $n$, we would obtain

$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}.$$

For instance, $\sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050$, and so on. Clearly, if we keep extending the sum all the way to infinity, the result will grow without bound, and summing all natural numbers would give infinity.
This clever summation trick is famously attributed to Carl Friedrich Gauss (1777–1855), who, according to a well-known anecdote, discovered it as a young schoolboy when asked to add the numbers from 1 to 100.
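Gauss's closed form is easy to verify numerically. The snippet below is a small illustrative sketch in Python (not part of the original argument); the helper name `gauss_sum` is ours.

```python
def gauss_sum(n: int) -> int:
    """Closed-form sum 1 + 2 + ... + n via Gauss's pairing trick."""
    return n * (n + 1) // 2

# Compare the formula against a direct summation.
for n in (10, 100, 1000):
    assert gauss_sum(n) == sum(range(1, n + 1))

print(gauss_sum(10))   # 55
print(gauss_sum(100))  # 5050
```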
Now let us modify our sum in the following way. Instead of summing the numbers themselves, let us sum their corresponding reciprocal terms, and denote this by

$$H_n = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}.$$

For instance, the numerical value of $H_{10}$ is approximately 2.929, that is, $H_{10} \approx 2.929$. Then let us extend the sum up to 100 terms. In that case we have $H_{100} \approx 5.187$. What if we extend it up to 1000 terms, or even up to 10 000 terms? Here is the pattern:

$$H_{1000} \approx 7.485, \qquad H_{10\,000} \approx 9.788.$$

It looks like we are converging to a number as we increase the number of terms being summed, doesn’t it? For instance, if we sum the first one million terms, we obtain about 14.393. It is obvious that each new term becomes smaller and smaller, but does the sum really converge?
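These partial sums can be reproduced in a few lines of Python. This is only an illustrative sketch; the helper name `harmonic` is our own.

```python
def harmonic(n: int) -> float:
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums keep growing, but ever more slowly.
for n in (10, 100, 1000, 10_000):
    print(n, round(harmonic(n), 3))
```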
Before we address this question, let us introduce an important mathematical operation called the logarithm.
Logarithm of a Number
If we multiply 2 by itself 5 times, we can write this in shorthand as

$$2 \times 2 \times 2 \times 2 \times 2 = 2^5 = 32.$$

To recover the exponent 5 from the number 32, we use the inverse operation, which we denote by

$$\log_2 32 = 5.$$

This inverse operation is called a logarithm, and in this particular example it is the logarithm base 2. In general, an equation of the form $b^x = y$ is equivalent to $\log_b y = x$.
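In Python, the built-in `math` module provides these operations directly; a brief sketch (the choice of examples is ours):

```python
import math

# The logarithm recovers the exponent: 2**5 = 32, so log base 2 of 32 is 5.
print(math.log2(32))  # 5.0

# The general rule b**x = y  <=>  log_b(y) = x, here with b = 3, y = 81:
x = math.log(81, 3)
print(round(x))  # 4, since 3**4 = 81
```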
Long before logarithms were formally introduced, scholars in the medieval Arab world developed the algebraic foundations that later made such ideas possible. Mathematicians like al-Khwarizmi (c. 780–850) systematized algebra and applied it to practical fields like astronomy. Their work laid essential groundwork for later developments in Europe, including the invention of logarithms by John Napier in the early 17th century.
Now let us introduce the most important logarithm used in mathematics, called the natural logarithm. It is simply the logarithm whose base is the special number $e \approx 2.71828$, which is called Euler’s number. We denote it by $\ln$. So, for example,

$$\ln e = 1,$$

and

$$\ln e^2 = 2.$$
The natural logarithm plays a central role in calculus, growth processes, and as we will soon see, in understanding the behavior of the above series.
The constant $e$ itself was first introduced by Jacob Bernoulli in 1683. However, it was Leonhard Euler (1707–1783) who later adopted the letter $e$ for this constant, around 1727 or 1728. Euler is one of the most important intellectuals to emerge from bourgeois society. We will return to his contributions in much more detail later, especially when we discuss the Basel problem.
Now let us look at some numerical values of the natural logarithm:

$$\ln 10 \approx 2.303, \quad \ln 100 \approx 4.605, \quad \ln 1000 \approx 6.908, \quad \ln 10\,000 \approx 9.210.$$
If we compare these values with the earlier results for $H_n$, a striking similarity appears. To make this clearer, let us look at the difference $H_n - \ln n$ for certain values of $n$. Here are the results:

$$H_{10} - \ln 10 \approx 0.626, \quad H_{100} - \ln 100 \approx 0.582, \quad H_{1000} - \ln 1000 \approx 0.578, \quad H_{10\,000} - \ln 10\,000 \approx 0.577.$$

The pattern is convincing: as $n$ goes to infinity, the quantity $H_n - \ln n$ appears to approach a constant value. This constant is called the Euler–Mascheroni constant, denoted by $\gamma \approx 0.5772$. Since we cannot set $n$ equal to infinity, we instead talk about $n$ approaching infinity. We express this behavior as

$$\lim_{n \to \infty} \left( H_n - \ln n \right) = \gamma.$$
We will return to the meaning of this “limit” operation in more detail later.
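The drift of $H_n - \ln n$ toward $\gamma$ is easy to watch numerically. A minimal sketch (the helper name is ours, not from the text):

```python
import math

def harmonic(n: int) -> float:
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# H_n - ln(n) settles toward the Euler-Mascheroni constant, about 0.57722.
for n in (10, 100, 1000, 10_000):
    print(n, round(harmonic(n) - math.log(n), 5))
```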
It was Lorenzo Mascheroni who corrected Euler’s initial approximation of $\gamma$ and attempted to calculate the constant to 32 decimal places.
This shows that the sum $H_n$ is essentially equal to the logarithm of $n$ (up to the Euler–Mascheroni constant). And if we recall the definition of the logarithm, although it grows very slowly, the logarithm of a number increases without bound. Therefore, the answer to our question is that $H_n$ does not converge as we increase the number of terms.
So we conclude that even though each new term becomes extremely small, the overall sum still does not converge. This naturally leads us to ask the following question: does every sum involving infinitely many terms always diverge?
The Basel Problem
To explore our new question, let us modify our sum further and define

$$B_n = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{n^2} = \sum_{k=1}^{n} \frac{1}{k^2}.$$

So in this case we are summing the squares of the reciprocals of the numbers from 1 up to $n$. Here are the results:

$$B_{10} \approx 1.5498, \quad B_{100} \approx 1.6350, \quad B_{1000} \approx 1.6439, \quad B_{10\,000} \approx 1.6448.$$
It seems we have finally reached a convergent infinite sum. But what exact number does it converge to?
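Before looking for the exact value, we can at least probe the convergence numerically; this sketch uses our own helper name:

```python
def basel_partial(n: int) -> float:
    """Partial sum 1/1^2 + 1/2^2 + ... + 1/n^2."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

# The partial sums stabilize quickly around 1.6449...
for n in (10, 100, 1000, 100_000):
    print(n, round(basel_partial(n), 5))
```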
The problem was first posed in 1650 by the Italian mathematician Pietro Mengoli. Later, the Bernoulli brothers in Basel, especially Jacob Bernoulli, helped popularize it across Europe. They showed that the series converges but were unable to determine its exact value. The Basel problem soon became well known, and many leading mathematicians attempted to solve it, yet none could find the precise sum—until Euler.
Before we go further and present Euler’s famous proof, we need to introduce a few basic terms.
Polynomial
A polynomial is an expression built using addition, subtraction, multiplication, and powers of $x$ with nonnegative integer exponents, and it contains only finitely many terms. For example,

$$x^2 + 3x + 1 \quad \text{and} \quad 5x^3 - 2x$$

are polynomials, whereas $\frac{1}{x} + 1$ is not. Here $x$ may simply be treated as a symbol.
Function
A function is a rule that assigns to each element of a set $A$ exactly one element of a set $B$. The set $A$ is called the domain, and the set $B$ is called the codomain. We write this as

$$f : A \to B.$$

For example, consider the set $A = \{1, 2, 3\}$, and the set $B = \{1, 4, 9\}$. We can map each element of $A$ to an element of $B$ by taking its square:

$$f(x) = x^2.$$

Indeed, $f(1) = 1$, $f(2) = 4$, and $f(3) = 9$.
If we interpret a polynomial as a rule that takes a numerical input and produces a numerical output, it becomes a polynomial function. For instance, treating $x$ as a variable, the polynomial $x^2 + 3x + 1$ defines the function

$$f(x) = x^2 + 3x + 1.$$
Limit
Sometimes a function is not defined at a certain point, but still makes sense as we get close to that point. For instance, consider

$$f(x) = \frac{\sin x}{x}.$$

At $x = 0$, this becomes $\frac{0}{0}$, which is undefined. However, if we choose a small value such as $x = 0.1$, we find $f(0.1) \approx 0.9983$, and $f(x) \approx 0.99998$ for $x = 0.01$, and so on. This suggests that the function approaches the value 1 as $x$ approaches 0.
Similarly, if we look at the function $\frac{\cos x - 1}{x}$, we observe that it approaches zero as $x \to 0$, and the function $\frac{1 - \cos x}{x^2}$ approaches $\frac{1}{2}$ as $x \to 0$. All these facts can be shown using the classical geometric arguments on the unit circle.
These behaviors are described using limits, written as

$$\lim_{x \to 0} \frac{\sin x}{x} = 1 \qquad \text{and} \qquad \lim_{x \to 0} \frac{\cos x - 1}{x} = 0.$$
Of course, when a function behaves nicely at a given point, its limit at that point is simply its value there. For example, as $x$ goes to 3, the function $f(x) = x^2$ takes the value $9$, so

$$\lim_{x \to 3} x^2 = 9.$$
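These limits are easy to probe numerically; the following sketch simply evaluates the ratios at ever smaller inputs:

```python
import math

# sin(x)/x creeps toward 1 and (cos(x) - 1)/x toward 0 as x shrinks.
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x, (math.cos(x) - 1) / x)
```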
Derivative
A derivative measures how sensitively a function changes when its input changes. Formally, it is defined as

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.$$
For example, if $f(x) = x^2$, then $f'(x) = 2x$. Similarly, the derivative of $x^3$ is $3x^2$. In fact, for any function of the form $f(x) = x^n$ with a positive integer $n$, the general rule is

$$f'(x) = n\,x^{n-1}.$$

In addition, the derivative of any constant is zero, since a constant does not change when $x$ changes.
Now let us consider the derivative of the trigonometric function $\sin x$. Using the identity

$$\sin(a + b) = \sin a \cos b + \cos a \sin b,$$

we compute

$$\frac{\sin(x + h) - \sin x}{h} = \frac{\sin x \cos h + \cos x \sin h - \sin x}{h},$$

which reads

$$\frac{\sin(x + h) - \sin x}{h} = \sin x \cdot \frac{\cos h - 1}{h} + \cos x \cdot \frac{\sin h}{h},$$

where we use the two previously mentioned facts: $\frac{\cos h - 1}{h} \to 0$ and $\frac{\sin h}{h} \to 1$ as $h \to 0$.

Thus: $(\sin x)' = \cos x$. In a similar way, one can show that the derivative of $\cos x$ is $-\sin x$.
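The limit definition also suggests a direct numerical check. The sketch below approximates $(\sin x)'$ by a difference quotient with a small step (the step size `1e-6` and the helper name are our own choices):

```python
import math

def difference_quotient(f, x: float, h: float = 1e-6) -> float:
    """Approximate f'(x) straight from the limit definition."""
    return (f(x + h) - f(x)) / h

# The quotient should sit very close to cos(x) at each point.
for x in (0.0, 0.5, 1.0):
    print(x, difference_quotient(math.sin, x), math.cos(x))
```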
The ideas behind limits and derivatives took shape gradually. In the 17th century, Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus, introducing the derivative as a precise way to describe instantaneous change. Their work formalized methods earlier hinted at by ancient Greek mathematicians such as Archimedes, who used limiting processes to approximate areas and volumes. The modern definition of a limit, however, was not fully clarified until the 19th century by Augustin-Louis Cauchy and Karl Weierstrass, who gave calculus its rigorous foundations.
To demonstrate how Euler approached the Basel problem, let us first look at a polynomial function:

$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3.$$

How can we identify the coefficients from the function itself? For example, evaluating at $x = 0$ immediately gives

$$f(0) = a_0.$$

What about $a_1$? We take the derivative:

$$f'(x) = a_1 + 2a_2 x + 3a_3 x^2,$$

so $f'(0) = a_1$. Next, $f''(x) = 2a_2 + 6a_3 x$, so $f''(0) = 2a_2$, and $f'''(x) = 6a_3$, so $f'''(0) = 6a_3$. Thus we can write

$$f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \frac{f'''(0)}{3!}\,x^3.$$
This is the Maclaurin series of the function. The key idea is that the coefficients are determined by the derivatives at zero. This idea allows us to express even non-polynomial functions using an “infinite polynomial”.
Let us apply this idea to $\sin x$. We need its derivatives at zero. The first derivative is $\cos x$, and the next derivative is $-\sin x$. Then, $-\cos x$. Next, $\sin x$, and the pattern repeats.

Evaluate at $x = 0$:

$$\sin 0 = 0, \quad \cos 0 = 1, \quad -\sin 0 = 0, \quad -\cos 0 = -1,$$

and so on. Now insert these into the Maclaurin formula:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
So we have managed to write a non-polynomial function in a form that looks almost like a polynomial. However, the price we pay is that it contains infinitely many terms.
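One can watch this "infinite polynomial" at work by truncating it. The helper below (our own sketch) sums the first few Maclaurin terms of $\sin x$ and compares the result with the built-in sine:

```python
import math

def sin_maclaurin(x: float, terms: int) -> float:
    """Truncated Maclaurin series x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Even a handful of terms reproduces sin(1) to high accuracy.
for terms in (1, 2, 3, 5):
    print(terms, sin_maclaurin(1.0, terms))
print(math.sin(1.0))
```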
Euler’s Elegant Solution
Let us now recall another important feature of polynomials. Consider the polynomial

$$x^2 - 1.$$

We can rewrite it as

$$x^2 - 1 = (x - 1)(x + 1).$$

Similarly,

$$x^3 - x = x(x - 1)(x + 1).$$

Why is this factorization useful? Because it reveals the zeros (or roots) of the corresponding polynomial function: in the latter case, $x = 0$, $x = 1$, and $x = -1$. These are exactly the values of $x$ for which the function becomes zero.
Of course, it is not always easy to factor a polynomial just by looking at it. But the nice thing is that the process also works in reverse: if we already know the zeros of a polynomial, then we can immediately write down its factorization. Fortunately, there are several methods for finding the zeros (or roots) of a polynomial.
For example, suppose we know that a certain polynomial has zeros at $x = 0$, $x = 1$, and $x = -1$. Then the polynomial must contain the factors

$$x(x - 1)(x + 1).$$

But there is an important point: we may multiply all of this by any non-zero constant, and the zeros will not change. Therefore, when we go in the reverse direction, we can introduce an additional constant, say $c$. In essence, $x(x - 1)(x + 1)$ and $c\,x(x - 1)(x + 1)$ have the same zeros, even though they are not the same polynomial.
This leads to a natural question: Can we write a non-polynomial function such as $\sin x$ in a similar factored form?

If we know all the zeros, the answer is yes. We already know that $\sin 0 = 0$. Since $\sin x$ is a periodic function, we also have

$$\sin(\pm\pi) = 0, \quad \sin(\pm 2\pi) = 0, \quad \sin(\pm 3\pi) = 0,$$

and so on. Thus the zeros of $\sin x$ are all integer multiples of $\pi$, positive and negative, including zero.

So we might guess that $\sin x$ can be written as an infinite product over its zeros:

$$\sin x = c\,x\,(x - \pi)(x + \pi)(x - 2\pi)(x + 2\pi)\cdots$$

for some constant $c$. We introduce $c$ because multiplying a function by a constant does not change its zeros, as we discussed, and we determine its value later.
Let us expand each factor slightly:

$$x - \pi = -\pi\left(1 - \frac{x}{\pi}\right), \quad x + \pi = \pi\left(1 + \frac{x}{\pi}\right), \quad x - 2\pi = -2\pi\left(1 - \frac{x}{2\pi}\right),$$

and so on. Thus the entire product becomes

$$\sin x = c\,x\,(-\pi)(\pi)(-2\pi)(2\pi)\cdots\left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\cdots$$

Group the constant factors together:

$$\sin x = c\,(-\pi^2)(-4\pi^2)\cdots\;x\left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\cdots$$

Now use the identity $(1 - a)(1 + a) = 1 - a^2$ to simplify each pair:

$$\left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right) = 1 - \frac{x^2}{\pi^2}, \quad \left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right) = 1 - \frac{x^2}{4\pi^2},$$

and so on. Let us also absorb all constant factors into a single constant:

$$C = c\,(-\pi^2)(-4\pi^2)(-9\pi^2)\cdots$$

Then

$$\sin x = C\,x\left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$

Divide both sides by $x$:

$$\frac{\sin x}{x} = C \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right).$$
Here, we introduce a shorthand notation for an infinite product. Just as the symbol $\Sigma$ denotes a sum over many terms, the symbol $\Pi$ (the Greek capital letter Pi) denotes a product over many terms.
Now, we already know that

$$\lim_{x \to 0} \frac{\sin x}{x} = 1.$$

Taking the limit of both sides as $x \to 0$ gives

$$1 = C \prod_{n=1}^{\infty}(1 - 0) = C,$$

so we conclude: $C = 1$. Thus Euler’s infinite product for the sine function is

$$\sin x = x \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right).$$
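The infinite product can be tested numerically by cutting it off after finitely many factors; convergence is slow but clearly visible. A hedged sketch (helper names and truncation sizes are our own choices):

```python
import math

def sin_product(x: float, factors: int) -> float:
    """Truncated Euler product x * prod_{n=1}^{N} (1 - x^2 / (n^2 pi^2))."""
    result = x
    for n in range(1, factors + 1):
        result *= 1.0 - x ** 2 / (n ** 2 * math.pi ** 2)
    return result

# More factors pull the truncated product toward the true sine value.
for n_factors in (10, 100, 10_000):
    print(n_factors, sin_product(1.0, n_factors))
print(math.sin(1.0))
```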
If we multiply all of these factors together, the coefficient of $x^2$ comes from choosing the $-\frac{x^2}{n^2\pi^2}$ term from exactly one of the factors and the constant term 1 from all the others. Namely, consider the first three terms:

$$\left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) = 1 - \left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2}\right)x^2 + \cdots$$

Therefore, the total coefficient of $x^2$ in the product is

$$-\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots\right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.$$

Thus, in the infinite product, we get

$$\frac{\sin x}{x} = 1 - \left(\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\right)x^2 + \cdots$$
Now let us compare this with the Maclaurin series of $\sin x$. We already know that

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,$$

and hence

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots$$

Now we have two different expressions for $\frac{\sin x}{x}$:

From the infinite product:

$$\frac{\sin x}{x} = 1 - \left(\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\right)x^2 + \cdots$$

From the Maclaurin series:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{6} + \cdots$$

Since these are two different ways of writing the same function, their coefficients must agree. In particular, the coefficients of $x^2$ must be equal:

$$-\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2} = -\frac{1}{6}.$$

So we obtain

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.$$

This is exactly the solution of the Basel problem: one of the most beautiful connections between infinite series and geometry (the number $\pi$) in all of mathematics.
In a similar way, by comparing the coefficients of $x^4$ in the Maclaurin series of $\frac{\sin x}{x}$ with those in its infinite product representation, we can also compute

$$\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{\pi^4}{90}.$$

This shows that Euler’s method not only solves the Basel problem but also reveals a deep pattern connecting even powers of reciprocals to corresponding powers of $\pi$.
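Both identities are easy to sanity-check numerically by comparing partial sums against the closed forms; the snippet below is an independent check, not Euler's argument:

```python
import math

def p_series(power: int, n: int) -> float:
    """Partial sum 1/1^p + 1/2^p + ... + 1/n^p."""
    return sum(1.0 / k ** power for k in range(1, n + 1))

print(p_series(2, 100_000), math.pi ** 2 / 6)   # both approx 1.6449
print(p_series(4, 100_000), math.pi ** 4 / 90)  # both approx 1.0823
```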
Euler’s solution to the Basel problem was so striking that it immediately made him a mathematical celebrity across Europe. His proof was unlike anything mathematicians had seen before—ingenious, daring, and unexpectedly effective. In the centuries that followed, the identity $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$ became one of the most frequently rediscovered results in mathematics. Dozens of new proofs were later found, using Fourier series, complex analysis, geometric methods, Parseval’s identity, and even probabilistic techniques. Each proof illuminates a different part of mathematics, and the Basel problem remains a classic example of how profound ideas can emerge from a deceptively simple infinite sum.
The Riemann Zeta Function
Now we are ready to introduce one of the most graceful functions in mathematics: the Riemann zeta function. So far, we have explored several infinite series, such as

$$\sum_{n=1}^{\infty}\frac{1}{n}, \qquad \sum_{n=1}^{\infty}\frac{1}{n^2}, \qquad \sum_{n=1}^{\infty}\frac{1}{n^4}.$$

These are all special cases of a single, more general function: the Riemann zeta function, denoted by

$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s}.$$

With this definition we see that $\zeta(1)$ is the (divergent) harmonic series, $\zeta(2) = \frac{\pi^2}{6}$, and $\zeta(4) = \frac{\pi^4}{90}$.
This naturally leads to a new question: what happens if we choose other values for $s$? For instance, what does the series mean when $s = 3$, or $s = -1$, or even when $s$ is a complex number?
After solving the Basel problem, Euler began investigating the more general series $\sum_{n=1}^{\infty}\frac{1}{n^s}$ for other values of $s$. In doing so, he discovered extraordinary formulas linking these sums to the prime numbers, revealing structures that later became the foundations of analytic number theory.
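The best-known of these is Euler's product formula $\sum_{n} n^{-s} = \prod_{p}\left(1 - p^{-s}\right)^{-1}$, where the product runs over all primes $p$. The sketch below (helper names and truncation limits are our own choices) compares a truncated product over primes with the partial sum for $s = 2$:

```python
import math

def primes_up_to(limit: int) -> list:
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def euler_product(s: float, limit: int) -> float:
    """Truncated product over primes p <= limit of 1 / (1 - p^-s)."""
    result = 1.0
    for p in primes_up_to(limit):
        result *= 1.0 / (1.0 - p ** (-s))
    return result

# Both routes approach zeta(2) = pi^2/6, about 1.6449.
print(euler_product(2.0, 10_000))
print(sum(1.0 / n ** 2 for n in range(1, 100_001)))
print(math.pi ** 2 / 6)
```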
More than a century later, in 1859, Bernhard Riemann, one of the greatest mathematicians in history, transformed the subject. In his eight-page paper Über die Anzahl der Primzahlen unter einer gegebenen Grösse (“On the Number of Primes Less Than a Given Magnitude”), he extended this series to complex values of $s$, introduced analytic continuation, discovered the functional equation, and revealed a deep connection between the zeros of $\zeta(s)$ and the distribution of prime numbers.
Riemann’s brief paper ended with a quiet remark about the location of these nontrivial zeros—a conjecture now known as the Riemann Hypothesis, widely regarded as the most important unsolved problem in mathematics.