Top 10 Hard to Believe Math Theorems that Exist
In physics or chemistry, a proposed law must agree with the physical world to be accepted, whereas in mathematics, theorems need only follow from the axioms - and some mathematicians have dug deep enough to prove statements that look downright bizarre. This is a list of such hard-to-believe theorems. First up is Euler's identity, e^(iπ) + 1 = 0. The truly remarkable aspect is not the elegant identity itself but the underlying formula:
e^(ix) = cos(x) + i·sin(x)
of which it is an immediate consequence.
Although the identity follows immediately from Euler's formula by setting x = π (tracing the unit circle in the complex plane), it seems hard to believe, as it connects five fundamental constants of mathematics in a single simple equation:
1. The number 0
2. The number 1
3. The number π
4. The constant e (approximately 2.718)
5. The imaginary unit i
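As a quick numerical sanity check (a sketch in Python, not part of any derivation), one can evaluate both sides of Euler's formula and then the identity itself at x = π:

```python
import cmath
import math

# Euler's formula: e^(ix) = cos(x) + i*sin(x), checked at x = pi.
x = math.pi
lhs = cmath.exp(1j * x)                   # e^(i*pi)
rhs = complex(math.cos(x), math.sin(x))   # cos(pi) + i*sin(pi)

print(abs(lhs - rhs))   # difference is at floating-point noise level
print(abs(lhs + 1))     # |e^(i*pi) + 1| is essentially zero
```

Both printed values sit at the limit of double-precision rounding, consistent with e^(iπ) + 1 = 0.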
The remarkable thing about Fermat's conjecture - now Fermat's Last Theorem - is that, despite the unlikelihood that he actually had a proof, it turned out to be true: Andrew Wiles finally proved it in the 1990s. Thank goodness for the lack of space in his margin - much of algebraic number theory was developed in attempts to prove the claim.
It seems Fermat was contemplating the Pythagorean theorem, which is taught in middle school, and came up with his own extension of it: the equation a^n + b^n = c^n has no positive integer solutions when n is greater than 2.
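A tiny brute-force search illustrates the contrast with the Pythagorean case (this is a sketch for intuition only - no finite search proves the theorem): with n = 3 it finds no solutions to a^n + b^n = c^n in a small range, while the same search with n = 2 immediately turns up the familiar triples.

```python
# Brute-force search (not a proof!) for integer solutions to
# a**n + b**n == c**n, with a, b below `limit`.
def search_counterexamples(n, limit):
    # Precompute n-th powers large enough to cover any sum a**n + b**n.
    powers = {c**n: c for c in range(1, 2 * limit)}
    hits = []
    for a in range(1, limit):
        for b in range(a, limit):
            if a**n + b**n in powers:
                hits.append((a, b, powers[a**n + b**n]))
    return hits

print(search_counterexamples(3, 200))  # -> [] : no cubic solutions found
print(search_counterexamples(2, 15))   # -> includes (3, 4, 5), (5, 12, 13), ...
```

For n = 2 the search finds Pythagorean triples at once; for n = 3 (or any higher exponent) Fermat's Last Theorem guarantees it will always come back empty.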
This one becomes less mystifying once you see the proof. It's a corollary of Cauchy's generalization of the Mean Value Theorem.
The reason it's often misunderstood is that (a) students frequently use it when it doesn't apply (such as when the limit of the quotient of the derivatives does not exist in the real numbers) and (b) it can be a somewhat circular technique.
For instance, one can use L'Hôpital's Rule to determine that the limit as x approaches 0 of sin(x)/x is 1. But in order to do this, one must know that the derivative of sin(x) is cos(x). And to prove this, one must evaluate the original limit...
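The limit itself is easy to see numerically - a quick sketch, independent of L'Hôpital's Rule, just evaluates sin(x)/x for shrinking x:

```python
import math

# Evaluate sin(x)/x for x shrinking toward 0; the values approach 1,
# consistent with the limit that L'Hopital's Rule (or the standard
# geometric squeeze argument) gives.
for k in range(1, 7):
    x = 10.0 ** (-k)
    print(x, math.sin(x) / x)
```

The error behaves like x²/6, so the ratio closes in on 1 very quickly.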
This is quite interesting - and I didn't know it.
In mathematics, we perform operations based on a set of basic rules called axioms. Ideally, a system of axioms would let every mathematical statement be proved either "true" or "false," with no statement left undecided; such a system is said to be complete. It is also essential that no statement can be proven both "true" and "false" from the axioms; such a system is said to be consistent. Gödel proved that any consistent axiom system rich enough to express basic arithmetic cannot be complete: there will always be true statements that it can neither prove nor disprove.
Remember in probability class when you learned that the probability of an event lies between 0 and 1, inclusive? Well, a mathematician decided to look at Gödel's theorems in a not-so-binary way and argued that nothing in this universe is either completely true or completely false - every statement carries some element of uncertainty. When people questioned his law, that questioning itself added uncertainty to it and, in a broader sense, validated it. The curious thing about this claim, attributed to an unknown mathematician, is that even though it is self-reinforcing, it doesn't significantly change the current mathematical setup. One thing, however, would change - the probability of an event would never be exactly 0 or 1.
When set theory began gaining popularity in the early 1900s, it rested on an assumption that could have been disastrous for the field: that any describable collection forms a set. Russell demonstrated the problem with his famous paradox: consider the set R of all sets that are not members of themselves. If R contains itself, then by its own definition it does not; if it does not contain itself, then by definition it must. Either way we get a contradiction, which forced set theory to be rebuilt on carefully restricted axioms. Confusing, huh?
This theorem states that a solid 3D ball can be decomposed into a finite number of disjoint pieces (five suffice) that can be reassembled, using only rotations and translations, into two solid balls, each the same size as the original. It rests on treating the ball not as a typical solid but as an infinite collection of points in space; the pieces are such wildly scattered point sets that they have no well-defined volume. Though the result is rigorously proved using set theory and the axiom of choice, it seems counterintuitive, since it goes against our basic notions of geometry.
This theorem comes from a branch of mathematics known as topology and was discovered by Luitzen Brouwer. While its technical statement is quite abstract, it has many fascinating real-world illustrations. Say we have a picture (for example, the Mona Lisa) and we take a copy of it. We can then do almost anything to this copy - shrink it, rotate it, crumple it up - as long as we don't tear it. Brouwer's Fixed-Point Theorem says that if we lay this copy so that it sits entirely on top of the original picture, there has to be at least one point on the copy that lies exactly over the corresponding point of the original. It could be part of Mona's eye, ear, or smile, but it has to exist.
This also works in three dimensions: imagine we have a glass of water, and we stir it as much as we want. Provided the stirring is continuous (no splashing the water apart) and the water ends up occupying the same region of the glass, the theorem guarantees at least one point of the water that finishes in exactly the same place it started.
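The one-dimensional case is easy to demonstrate directly: any continuous f mapping [0, 1] into itself must cross the line y = x, so g(x) = f(x) − x satisfies g(0) ≥ 0 and g(1) ≤ 0, and bisection locates a fixed point. A small sketch, using cos as an example self-map of [0, 1]:

```python
import math

# One-dimensional Brouwer: any continuous f mapping [0, 1] into itself
# has a fixed point. Since g(x) = f(x) - x satisfies g(0) >= 0 and
# g(1) <= 0, bisection homes in on a point where f(x) == x.
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# cos maps [0, 1] into [cos(1), 1], a subset of [0, 1].
x_star = fixed_point(math.cos)
print(x_star, math.cos(x_star))  # both close to the same value (~0.739)
```

In higher dimensions no such elementary bisection exists, which is part of what makes the full theorem remarkable.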
You might have noticed how prime numbers seem to occur at random among the integers; it appears nearly impossible to find a pattern in where they fall. Yet there is a theorem for that. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896, using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function).
The theorem's classical form is π(N) ~ N / log(N), where π(N) is the prime-counting function (the number of primes not exceeding N) and log(N) is the natural logarithm of N. It means that for large N, the probability that a randomly chosen integer not greater than N is prime is very close to 1 / log(N).
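The estimate is easy to check empirically with a short sieve (a sketch; `prime_pi` is our own helper, not a library function). The ratio π(N) / (N / log N) drifts toward 1 as N grows, exactly as the theorem predicts:

```python
import math

# Sieve of Eratosthenes up to n, then count the primes.
def prime_pi(n):
    sieve = bytearray([1] * (n + 1))
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            # Cross off every multiple of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

# Compare pi(N) against the Prime Number Theorem's estimate N / log(N).
for N in (10**3, 10**4, 10**5, 10**6):
    ratio = prime_pi(N) / (N / math.log(N))
    print(N, prime_pi(N), round(ratio, 4))
```

The convergence is slow (the ratio is still above 1.07 at N = 10⁶), which is why sharper estimates such as the logarithmic integral Li(N) are preferred in practice.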
The Newcomers
It's quite fascinating to consider why this theorem only holds true for certain sets of numbers.