# Top 10 Hard to Believe Math Theorems that Exist

In physics or chemistry, a proposed law actually needs to have some correlation with the physical world to be accepted, whereas in math, some mathematicians have dug deep enough to come up with some weird-looking theorems and statements, which they were nevertheless able to prove. This is a list of such hard-to-believe theorems that exist in math.

### The Top Ten

**Euler's Identity [e^iπ + 1 = 0]**

Although this can be easily derived from Euler's formula with a unit circle in the Argand plane, this equation seems really hard to believe, as it connects 5 fundamental constants in math in a really simple way:

1)The number 0

2)The number 1

3)The number π

4)The constant e (≈2.718)

5)The imaginary unit i

The truly remarkable thing is not the pretty identity itself but the underlying formula

e^(ix) = cos(x) + i sin(x)

of which it's an immediate consequence.
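The identity is easy to check numerically. A minimal sketch using Python's standard `cmath` module (the tiny nonzero remainder is just floating-point rounding):

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 should equal 0.
# Floating-point arithmetic leaves a residue on the order of 1e-16.
result = cmath.exp(1j * math.pi) + 1
print(abs(result))  # a number around 1e-16, i.e. zero up to rounding
```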

**Fermat's Last Theorem [a^n + b^n = c^n has no positive integer solutions for n > 2]**

The remarkable thing about Fermat's conjecture is that, given how unlikely it is that he actually proved it, it actually turned out to be true. Thank God for the lack of space in his margin: algebraic number theory was developed in attempts to prove this claim.

This guy was staring too hard at the Pythagorean theorem taught in middle school and came up with his own twist on it.
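One can at least confirm the theorem on a small scale by brute force. This sketch searches for counterexamples to the n = 3 case over a small range (a sanity check only, of course, not a proof; the theorem covers all positive integers):

```python
# Brute-force check of Fermat's Last Theorem for n = 3 over a small range.
LIMIT = 50
# All cubes large enough to cover any a^3 + b^3 with a, b < LIMIT.
cubes = {k**3 for k in range(1, 2 * LIMIT)}
counterexamples = [
    (a, b)
    for a in range(1, LIMIT)
    for b in range(a, LIMIT)
    if a**3 + b**3 in cubes
]
print(counterexamples)  # [] -- no solutions found, as the theorem predicts
```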

**Palindromic Polynomial Theorem**

This is quite interesting---and I didn't know it.

This is an interesting fact about polynomials. It’s also fairly useless which is probably why not many people are aware of it.

Take a polynomial equation and its roots. For example: if ax^3 + bx^2 + cx + d has roots p, q, r, then the polynomial with reversed coefficients, dx^3 + cx^2 + bx + a, has roots 1/p, 1/q, 1/r respectively (assuming none of the roots is zero).
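A quick sketch verifying this on a concrete cubic: p(x) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3, so the reversed polynomial -6x^3 + 11x^2 - 6x + 1 should have roots 1, 1/2, 1/3.

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

p = [1, -6, 11, -6]        # x^3 - 6x^2 + 11x - 6, roots 1, 2, 3
q = list(reversed(p))      # -6x^3 + 11x^2 - 6x + 1
for root in (1, 2, 3):
    assert abs(horner(p, root)) < 1e-9       # root of the original
    assert abs(horner(q, 1 / root)) < 1e-9   # reciprocal is a root of the reverse
print("reciprocal roots confirmed")
```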

**L'Hopital's Rule**

This one becomes not so mystifying once you see the proof. It's a corollary of Cauchy's generalization of the Mean Value Theorem.

The reason that it's much maligned is that (a) students often use it when it doesn't apply (such as when the limit of the quotient of the derivatives does not exist in R) and (b) it can be a somewhat circular technique.

For instance, one can use L'Hopital's rule to determine that the limit as x approaches 0 of sin x / x is 1. But in order to do this, one must know that the derivative of sin x is cos x. And in order to prove this, one must evaluate the original limit...

This rule is really famous among high school math students, while also being banned in many high school exams. It states that when the limit of one function divided by another, as x tends to some value a, gives an indeterminate form like 0/0 or ∞/∞, and both functions are differentiable near a, the limit is the same as the limit of the quotient of their derivatives. It is really fascinating since it reduces the complexity of limit problems on a large scale.
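The sin x / x example above can be eyeballed numerically: as x shrinks, the original quotient and the quotient of derivatives (cos x / 1) converge to the same value, 1.

```python
import math

# Compare sin(x)/x with the ratio of derivatives cos(x)/1 as x -> 0.
# L'Hopital's rule predicts both columns approach the same limit, 1.
for x in (0.1, 0.01, 0.001):
    print(f"x = {x:>6}   sin(x)/x = {math.sin(x) / x:.10f}   cos(x) = {math.cos(x):.10f}")
```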

Hard to believe indeed, but a lot of physics students such as myself take it as a divine miracle.

Excellent list, man!

**Gödel's Incompleteness Theorems**

In maths, we perform operations based on a set of basic rules called axioms. A system of axioms is better when it is set up in such a way that every mathematical statement can be proved either "true" or "false", with no statement left 'unproved'. In this case, the system is said to be complete. It is also better to arrange the axioms so that no mathematical statement can be proved as both "true" and "false" by two different derivations. In this case, the system is said to be consistent. Gödel basically went full meta and successfully proved that no sufficiently powerful axiom system (one capable of expressing basic arithmetic) can be both complete and consistent at the same time.

Many of Raymond Smullyan's books are centered around this topic and make it more accessible.

**The Probability Paradox**

Remember in probability class when you learned that the probability of an event lies between 0 and 1, with 0 and 1 inclusive? Well, a mathematician decided to look at Gödel's theorems in a 'not-so-binary' manner and argued that nothing in this universe is either completely true or completely false. He then went on to explain how every statement has some element of uncertainty associated with it, and when people started giving reasons why his law can't be true, that only increased the element of uncertainty in his law and further validated it in a broader sense. More meta stuff, basically. The interesting thing about this discovery, made by a mathematician whose name is unknown, is that even though it is self-proving and valid at the same time, it doesn't really make any significant changes to the current mathematical setup. One thing, however, is changed: the probability of an event is never exactly 0 or 1.

**Russell's Paradox**

When set theory was starting to become popular in the 1900s, it was assumed that any describable collection of things forms a set, which would have proven disastrous to set theory. Russell showed why: consider the set of all sets that do not contain themselves. Does it contain itself? If it does, it shouldn't, and if it doesn't, it should; either answer is a contradiction. Vaguely defined collections are also not sets: the set of all good people is not a set, as 'good people' is a subjective term. Confusing, huh?

**Banach-Tarski Theorem**

This theorem states that a solid 3D ball in space can be decomposed into a finite number of disjoint subsets, which can be recombined into two solid balls, each identical in size to the original. It is proved by treating the ball not as a 'typical solid' but as an infinite collection of points in space, using the fact that there are infinitely many points between any two points. Though it has been proved rigorously with set-theoretic geometry (and the axiom of choice), it is really weird, since it goes against the basic notion of volume.

**Brouwer's Fixed Point Theorem**

This theorem comes from a branch of math known as topology, and was discovered by Luitzen Brouwer. While its technical expression is quite abstract, it has many fascinating real-world implications. Let's say we have a picture (for example, the Mona Lisa) and we take a copy of it. We can then do whatever we want to this copy: make it smaller, rotate it, crumple it up, anything, as long as we don't tear it. Brouwer's Fixed Point Theorem says that if we place this copy so that it lies entirely on top of our original picture, there has to be at least one point on the copy that is exactly on top of the same point on the original. It could be part of Mona's eye, ear, or smile, but it has to exist.

This also works in three dimensions: imagine we have a glass of water, and we take a spoon and stir it as much as we want. By this theorem (assuming the stirring moves the water continuously, with no splashing), there will be at least one point of the water that ends up in the exact same place as it was before we stirred.
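In one dimension the theorem is easy to demonstrate: any continuous f mapping [0, 1] into itself has a fixed point, because g(x) = f(x) - x is ≥ 0 at 0 and ≤ 0 at 1, so bisection can hunt it down. A sketch using cos(x), which maps [0, 1] into itself:

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Find x in [lo, hi] with f(x) = x, assuming f maps the interval into itself.

    Bisection on g(x) = f(x) - x: the invariant g(lo) >= 0 >= g(hi) is
    maintained, so the interval always brackets a fixed point.
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

x = fixed_point(math.cos)  # cos maps [0, 1] into [cos 1, 1], a subset of [0, 1]
print(x)                   # ~0.739085, the unique solution of cos(x) = x
```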

**Prime Number Theorem**

You might have noticed how prime numbers tend to occur randomly in the list of numbers. It seems nearly impossible to find a pattern in which prime numbers occur. Yet there is a theorem for that. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function).

The first such distribution found is π(N) ~ N / log(N), where π(N) is the prime-counting function and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1 / log(N).
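The approximation can be watched converging with a simple sieve. A sketch comparing π(N) with N / log(N) at a few scales (the ratio drifts slowly toward 1, as the theorem promises):

```python
import math

def prime_count(n):
    """Count primes <= n with a Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Cross out every multiple of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for n in (10**3, 10**4, 10**5):
    approx = n / math.log(n)
    print(n, prime_count(n), round(approx), round(prime_count(n) / approx, 3))
```

For example, π(1000) = 168 while 1000 / log(1000) ≈ 145, a ratio of about 1.16 that shrinks toward 1 as N grows.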

Ah yes, 2 is the 2nd most common value of the divisor-counting function (not sure what it's called in English) up to 100, while only the 7th most common up to 20 million.

### The Contenders

**Universal Chord Theorem**

It's quite fascinating to think about why this theorem only guarantees chords of certain lengths: for a continuous function taking equal values at the endpoints of an interval, horizontal chords of length 1/n of the interval always exist for every positive integer n, but chords of other lengths need not.
