math – update

blogging & searching for true math …

#9: Pure Awesomeness of the Fundamental Theorems of Mathematics

Q: In your opinion, which of these fundamental theorems is the most incredible?

  • Fundamental Theorem of Arithmetic

  • Fundamental Theorem of Algebra

  • First Fundamental Theorem of Calculus

A: Here, at least, we have uniqueness of the answer!

The Fundamental Theorem of Arithmetic

The Fundamental Theorem of Arithmetic – Why isn’t the fundamental theorem of arithmetic obvious?

The fundamental theorem of arithmetic states that every positive integer can be factorized in exactly one way as a product of prime numbers. This statement has to be appropriately interpreted: we count the factorizations 3\times 5\times 13 and 13\times 3\times 5 as the same, for instance. Note that it is essential not to count 1 as a prime, or else we could stick a product of 1s on to the end of any factorization to get a different one: 3\times 5\times 13=3\times 5\times 13\times 1\times 1\times 1. But doesn’t that mean that 1 itself cannot be written as a product of primes? No — we define the “empty product” (what you get when you take a bunch of … no numbers at all and multiply them together) to be 1. That is a sensible convention because we would like multiplying a product of numbers by the empty product not to make any change to the result.

That’s enough about what the fundamental theorem of arithmetic says. In this post I want to discuss the question of why it is a theorem at all. Isn’t it more like an observation? After all, given any number, we can simply work out its prime factorization.

Answer 1. If you think it’s obvious, then you’re probably assuming what you need to prove.

If you say, “we can simply work out its prime factorization,” you are already assuming that that factorization is unique. Otherwise, you would have had to say, “we can simply work out a prime factorization for it”. Of course, if you say it that way, it suddenly doesn’t seem quite as obvious that there’s only one. If you’re trying to argue that it’s obvious and you ever utter the phrase, “the prime factorization,” then you are begging the question, since implicit in those words is the assertion that there is only one prime factorization.

Answer 2. Just because you’ve got a completely deterministic method for working out a prime factorization, that doesn’t mean what you work out is the only prime factorization.

The following method is probably how you factorize a number: you divide it by 2 as many times as you can (which may be no times at all), then by 3, then by 5, and so on, keeping track of what you’ve done. For example, if your starting number is 575, then you can’t divide it by 2 or 3, but you can divide it by 5 to get 115, and then by 5 again to get 23, and then … well, you’ll probably know that 23 is prime, but you could also argue that since 5^2 is greater than 23 and you’ve checked 2 and 3, then it must be prime.
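
To make the procedure concrete, here is a minimal Python sketch of the smallest-prime-first method just described (the code and the function name are my own illustration, not part of the original post):

def factorize_smallest_first(n):
    """Factor n > 1 by repeatedly dividing out the smallest prime that goes into it.

    This mirrors the procedure described above: try 2, then 3, then 5, and so on.
    Returns the prime factors in non-decreasing order.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:   # whatever is left has no divisor up to its square root, so it is prime
        factors.append(n)
    return factors

print(factorize_smallest_first(575))  # [5, 5, 23], as in the example above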

But just because that method always gives the same answer (the method in the abstract being to keep dividing by the smallest prime that goes into the number you see in front of you), that doesn’t mean that there might not be some other method that gives a different answer. For example, what if you looked for the largest prime that went into your number? You’re probably thinking that you’ll just get the same list of primes, but written backwards. But how do you know this? Obviously that’s what you’ll get if there’s only one way of writing the number as a product of primes, but that’s what we’re trying to prove. If there’s another way of writing it as a product of primes, then perhaps the largest prime in the other way of doing things is larger than the largest prime that results from the usual method.

Answer 3. Look, it just bloody well isn’t obvious, OK?

Sorry, I lost it for a moment there. But if you persist in thinking that it’s obvious, then perhaps you can tell me why it is obvious that 23\times 1759 is not the same number as 53\times 769. I’ll save you a little time by revealing that all of 23, 53, 769 and 1759 are prime. I will not accept as an answer that if you calculate those two products you get different results. That to me is an admission that it wasn’t obvious that the answers would be different. If it was obvious, then why bother to calculate them?

By the way, I’ll grant you that sometimes it’s obvious that two products of primes are different. For example, if 2 is involved in one product and not the other, then the first product is even and the second is odd. However, even that second assertion depends on the (simple) result that a product of odd numbers is odd. We’d be able to see instantly that 23\times 1759\ne 53\times 769 if we knew that a product of two non-multiples of 23 was always a non-multiple of 23. But is there an easy way of showing that? We can work out the multiplication table mod 23, but that’s a bit tedious. Alternatively, we can use some theory from the course — but unless you’re finding the course so easy that that theory (a proof derived from Euclid’s algorithm) is utterly obvious, then I don’t think you can call it obvious that 23\times 1759\ne 53\times 769.

Here’s another pair of products of primes for your delectation and delight: 47\times 863 and 73\times 557. Are they obviously different? It’s not clear which is bigger — they’re both a little over 40,000. What about the last digit? Damn, 1 in both cases. OK, let’s go for the second last digit, which is a bit of a cheat but still. In the first case we get the last digit of 4\times 3+7\times 6+2, which is 6. In the second case we get the last digit of 7\times 7+3\times 5+2, which is again 6. So we’ve got two numbers that are a little bit above 40,000 that both end 61. As it happens 47\times 863=40561 and 73\times 557=40661.

If you wanted a quicker demonstration that those two numbers are different, you could work out what they are mod 3, which is a lot easier than working them out completely. But that’s not going to work in general. For instance, it doesn’t work for the first example, where the smallest modulus for which they differ is 7. If I took two pairs of absolutely huge numbers (with millions of digits), I could get them to agree in almost all their digits and differ by a multiple of, say, 1000! And even if such small-modulus tests can be used, it isn’t obvious in advance that they will work if it isn’t obvious in advance that the products are different.
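
As a small illustration of these modular tests (a Python sketch of my own, not part of the original post), one can search for the smallest modulus at which each pair of products disagrees:

# Report the smallest modulus m >= 2 at which x and y differ -- a cheap certificate
# that x and y really are different numbers.
def smallest_distinguishing_modulus(x, y):
    m = 2
    while x % m == y % m:
        m += 1
    return m

a, b = 23 * 1759, 53 * 769          # 40457 and 40757
c, d = 47 * 863, 73 * 557           # 40561 and 40661

print(smallest_distinguishing_modulus(a, b))   # 7, as noted above for the first pair
print(smallest_distinguishing_modulus(c, d))   # 3, which is why the mod-3 test works here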

Answer 4. If it’s so obvious that every number has a unique factorization, then why is the corresponding statement false in a similar context?

Consider the collection of all numbers of the form a+b\sqrt{-5} where a and b are integers. (You might prefer to write these numbers as a+ib\sqrt{5}, but I prefer \sqrt{-5} for reasons that I don’t want to go into here, but might mention in a future post.)

These numbers have various properties in common with the integers: you can add them and multiply them, there are identities for both addition and multiplication, and every number has an additive inverse. And as with integers, if you divide one by another, you don’t always get a third, so the notion of divisibility makes sense too. That means that we could if we wanted try to define a notion of a “prime” number of the form a+b\sqrt{-5}.

Just before I try to do that, let’s quickly decide what we mean by a prime when we allow negative numbers. Presumably we’re going to want, say, -5 to be a prime, but what definition will lead to that? The small technical obstacle we face is that if we allow negative primes like that, then for a somewhat silly reason factorizations won’t be unique: for instance, 15=3\times 5=(-3)\times(-5). The usual approach to this is to divide numbers into three kinds: prime numbers, composite numbers, and units. A unit is a number that has a multiplicative inverse, so in \mathbb{Z} the units are 1 and -1. A prime is a number that cannot be written in the form ab unless exactly one of a and b is a unit. (I said “exactly” one because I didn’t want accidentally to define units themselves to be primes.) And now we can express the fundamental theorem of arithmetic in \mathbb{Z} by saying that every number has exactly one factorization into primes, except that we count two factorizations as the same if the only difference (apart from the order) is that the primes in one factorization are multiplied by units to give the primes in the other factorization. For example, we count 3\times 5 and (-5)\times(-3) as the same, since we can reorder the second factorization as (-3)\times(-5) and then multiply both primes by the unit -1 to get 3\times 5, which gives us the first factorization.

In short, what we’re saying is that if two products of primes don’t obviously give the same number, then they give different numbers.

Right, back to numbers of the form a+b\sqrt{-5}. Let’s check that 2 is a prime in this ring. (A ring in this context is, roughly speaking, an algebraic structure with addition and multiplication with all the usual axioms apart from the existence of multiplicative inverses. You can think of it as something a bit like \mathbb{Z}. However, the actual definition is a bit more general, as you can find out from the relevant Wikipedia article. Hmm, I’ve just looked at that article and I don’t like it at all: the list of examples is woefully inadequate. The important examples are eventually mentioned, but not in the list of basic examples, so you don’t get a good idea that they are the important ones.) First of all, 2 isn’t a unit, since 1/2 is not of the form a+b\sqrt{-5}. The modulus of a+b\sqrt{-5} is \sqrt{a^2+5b^2}, so if b\ne 0, then the modulus of a+b\sqrt{-5} is bigger than 2. It follows that the only way of writing 2 as a product of non-units would have to be to write it as a product of non-unit integers, which we can’t. So 2 is prime.

A similar check can be run for 3. So 2 and 3 are primes. It’s also possible to show that 1+\sqrt{-5} and 1-\sqrt{-5} are primes. But 2\times 3=(1+\sqrt{-5})(1-\sqrt{-5}), so 6 has a non-unique factorization into primes. (It’s also easy to see that you can’t multiply 2 or 3 by a unit to get one of 1\pm\sqrt{-5}.)
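
To see the two factorizations of 6 side by side, here is a small Python sketch (my own, not Gowers’s) that multiplies elements a+b\sqrt{-5}, represented as pairs (a, b), and computes their norms a^2+5b^2:

def mul(x, y):
    # (a + b*sqrt(-5)) * (c + d*sqrt(-5)) = (ac - 5bd) + (ad + bc)*sqrt(-5)
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    # |a + b*sqrt(-5)|^2 = a^2 + 5*b^2; norms are multiplicative
    a, b = x
    return a * a + 5 * b * b

two, three = (2, 0), (3, 0)
u, v = (1, 1), (1, -1)                 # 1 + sqrt(-5) and 1 - sqrt(-5)

print(mul(two, three), mul(u, v))      # (6, 0) (6, 0): the same number 6, twice
print(norm(two), norm(three), norm(u), norm(v))   # 4 9 6 6

Since the norms of 2 and 3 differ from those of 1\pm\sqrt{-5}, neither 2 nor 3 can be a unit multiple of either of them, which is the point made in parentheses above.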

Why is this a problem for people who hold that the fundamental theorem of arithmetic is obvious? It’s because they have to explain what it is about \mathbb{Z} that is relevantly different from the ring of numbers of the form a+b\sqrt{-5}, which is denoted \mathbb{Z}[\sqrt{-5}]. Why can’t we just translate any proof that works for \mathbb{Z} into a proof that works for \mathbb{Z}[\sqrt{-5}]?

Here’s an example of how you can use \mathbb{Z}[\sqrt{-5}] to defeat somebody who claims that the result is obvious in \mathbb{Z}. Let’s take the argument that you can just work the factorization out by repeatedly dividing by the smallest prime that goes into your number. Well, you can do that in \mathbb{Z}[\sqrt{-5}] as well. Take 6, for instance. The smallest prime (in the sense of having smallest modulus) that goes into 6 is 2. Dividing by 2 we get 3, which is prime. So we’re done. So there can’t be another factorization. Except that there is another factorization. So the argument just isn’t an argument.

In a future post I’ll discuss the proof of the fundamental theorem of arithmetic. But this post is just to try to convince you (if you needed convincing, which you may not have) that the result is worth going to some effort to prove.

Source: Gowers’s blog

 

Fundamental Theorem of Arithmetic

Euclid‘s Elements  – Essentially the statement and proof of the fundamental theorem

Proposition 30 is referred to as Euclid’s lemma. And it is the key in the proof of the fundamental theorem of arithmetic:

If two numbers by multiplying one another make some number, and any prime number measure the product, it will also measure one of the original numbers.

— Euclid, Elements Book VII, Proposition 30
Proposition 31 is proved directly by infinite descent:

Any composite number is measured by some prime number.

— Euclid, Elements Book VII, Proposition 31
Proposition 32 is derived from proposition 31, and proves that the decomposition is possible:

Any number either is prime or is measured by some prime number.

— Euclid, Elements Book VII, Proposition 32
Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil.[7] Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case:

If a number be the least that is measured by prime numbers, it will not be measured by any other prime number except those originally measuring it.

— Euclid, Elements Book IX, Proposition 14
Article 16 of Gauss’s Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.[1]

Existence Proof

We need to show that every integer greater than 1 is either prime or a product of primes. For the base case, note that 2 is prime. By induction: assume the statement is true for all numbers between 1 and n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b with n = ab and 1 < a \le b < n. By the induction hypothesis, a = p_1 p_2 \cdots p_j and b = q_1 q_2 \cdots q_k are products of primes. But then n = ab = p_1 p_2 \cdots p_j q_1 q_2 \cdots q_k is a product of primes.
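
The induction translates directly into a short recursive routine. The following Python sketch (added here purely for illustration) splits off a nontrivial divisor when one exists and recurses on both factors, exactly as in the proof:

def prime_factors(n):
    """Return a list of primes whose product is n (for n > 1), mirroring the induction:
    if n = a * b with 1 < a <= b < n, factor a and b separately; otherwise n is prime."""
    for a in range(2, int(n ** 0.5) + 1):
        if n % a == 0:
            return prime_factors(a) + prime_factors(n // a)
    return [n]      # no divisor up to sqrt(n), so n itself is prime

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]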

Uniqueness Proof

Assume that s > 1 is the product of prime numbers in two different ways:

{\begin{aligned}s&=p_{1}p_{2}\cdots p_{m}\\&=q_{1}q_{2}\cdots q_{n}.\end{aligned}}

We must show m = n and that the qj are a rearrangement of the pi.

By Euclid’s lemma, p1 must divide one of the qj; relabeling the qj if necessary, say that p1 divides q1. But q1 is prime, so its only divisors are itself and 1. Therefore, p1 = q1, so that

{\begin{aligned}{\frac {s}{p_{1}}}&=p_{2}\cdots p_{m}\\&=q_{2}\cdots q_{n}.\end{aligned}}

Reasoning the same way, p2 must equal one of the remaining qj. Relabeling again if necessary, say p2 = q2. Then

{\begin{aligned}{\frac {s}{p_{1}p_{2}}}&=p_{3}\cdots p_{m}\\&=q_{3}\cdots q_{n}.\end{aligned}}

This can be done for each of the m pi‘s, showing that m ≤ n and that every pi is a qj. Applying the same argument with the p‘s and q‘s reversed shows n ≤ m (hence m = n) and that every qj is a pi.

Canonical Representation of a Positive Integer

Every positive integer n > 1 can be represented in exactly one way as a product of prime powers:

n=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}=\prod _{i=1}^{k}p_{i}^{\alpha _{i}}

where p1 < p2 < … < pk are primes and the αi are positive integers. This representation is commonly extended to all positive integers, including one, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0).

Arithmetic Operations

The canonical representation, when it is known, is convenient for easily computing products, gcd, and lcm:

a\cdot b=2^{a_{2}+b_{2}}\,3^{a_{3}+b_{3}}\,5^{a_{5}+b_{5}}\,7^{a_{7}+b_{7}}\cdots =\prod p_{i}^{a_{p_{i}}+b_{p_{i}}},
\gcd(a,b)=2^{\min(a_{2},b_{2})}\,3^{\min(a_{3},b_{3})}\,5^{\min(a_{5},b_{5})}\,7^{\min(a_{7},b_{7})}\cdots =\prod p_{i}^{\min(a_{p_{i}},b_{p_{i}})},
\operatorname {lcm} (a,b)=2^{\max(a_{2},b_{2})}\,3^{\max(a_{3},b_{3})}\,5^{\max(a_{5},b_{5})}\,7^{\max(a_{7},b_{7})}\cdots =\prod p_{i}^{\max(a_{p_{i}},b_{p_{i}})}.

However, since integer factorization of large integers is much harder than computing their product, gcd, or lcm, these formulas have limited use in practice.
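
As a concrete illustration of the canonical representation and of the min/max formulas above, here is a small Python sketch (my own addition; it is only practical for numbers small enough to factor):

from collections import Counter

def exponents(n):
    """Canonical representation of n as a mapping {prime: exponent} (empty for n = 1)."""
    e, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            e[d] += 1
            n //= d
        d += 1
    if n > 1:
        e[n] += 1
    return e

def gcd_from_exponents(a, b):
    ea, eb = exponents(a), exponents(b)
    g = 1
    for p in set(ea) & set(eb):           # min of the exponents, over the common primes
        g *= p ** min(ea[p], eb[p])
    return g

def lcm_from_exponents(a, b):
    ea, eb = exponents(a), exponents(b)
    l = 1
    for p in set(ea) | set(eb):           # max of the exponents, over all primes that occur
        l *= p ** max(ea[p], eb[p])
    return l

print(dict(exponents(360)))                                        # {2: 3, 3: 2, 5: 1}
print(gcd_from_exponents(360, 84), lcm_from_exponents(360, 84))    # 12 2520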

Generalizations

The first generalization of the theorem is found in Gauss’s second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by Z[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that (except for order) the composites have unique factorization as a product of primes.[11]

In 1844, while working on cubic reciprocity, Eisenstein introduced the ring Z[ω], where

\omega ={\frac {-1+{\sqrt {-3}}}{2}}

satisfies \omega^{3}=1; that is, \omega is a primitive cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units

\pm 1,\pm \omega ,\pm \omega ^{2}

and that it has unique factorization. However, it was also discovered that unique factorization does not always hold. An example is given by Z[√−5]. In this ring one has[12]

6=2\cdot 3=(1+\sqrt{-5})(1-\sqrt{-5}).

Examples like this caused the notion of “prime” to be modified. In Z[√−5] it can be proven that if any of the factors above can be represented as a product, e.g. 2 = ab, then one of a or b must be a unit. This is the traditional definition of “prime”. It can also be proven that none of these factors obeys Euclid’s lemma; e.g. 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in Z[√−5] (only divisible by itself or a unit) but not prime in Z[√−5] (if it divides a product it must divide one of the factors). The mention of Z[√−5] is required because 2 is prime and irreducible in Z. Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid’s classical lemma can be rephrased as “in the ring of integers Z every irreducible is prime”. This is also true in Z[i] and Z[ω], but not in Z[√−5].
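
The norm a^2 + 5b^2 makes this “irreducible but not prime” behaviour easy to verify by machine. The following Python sketch (my own addition, not part of the article) checks that no element of Z[√−5] has norm 2 or 3, so 2 and 3 have no proper factors, and that 2 divides the product (1 + √−5)(1 − √−5) = 6 without dividing either factor:

# Elements a + b*sqrt(-5) are pairs (a, b); the norm a^2 + 5b^2 is multiplicative,
# so a proper factor of 2 (norm 4) or of 3 (norm 9) would need norm 2 or 3.
norms = {a * a + 5 * b * b for a in range(-3, 4) for b in range(-2, 3)}
print(2 in norms, 3 in norms)     # False False (larger |a| or |b| only increase the norm)

# 2 divides a + b*sqrt(-5) exactly when both a and b are even:
def divisible_by_two(x):
    a, b = x
    return a % 2 == 0 and b % 2 == 0

print(divisible_by_two((1, 1)), divisible_by_two((1, -1)), divisible_by_two((6, 0)))
# False False True: 2 divides 6 but neither 1 + sqrt(-5) nor 1 - sqrt(-5)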

The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.

In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.

Source: WikipediA

 

Fundamental Theorem of Algebra

The fundamental theorem of algebra states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with an imaginary part equal to zero.

Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed.

The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division.
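
As a quick numerical illustration of the “exactly n roots” form of the statement (my own sketch using NumPy, not part of the article), one can ask for the roots of a degree-5 polynomial:

import numpy as np

coeffs = [1, 0, 0, 0, 0, -1]          # z^5 - 1, coefficients listed from highest degree down
roots = np.roots(coeffs)

print(len(roots))                                   # 5 roots, counted with multiplicity
print(np.allclose(np.polyval(coeffs, roots), 0))    # True: each returned root is a zero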

In spite of its name, there is no purely algebraic proof of the theorem, since any proof must use the completeness of the reals (or some other equivalent formulation of completeness), which is not an algebraic concept.

A first attempt at proving the theorem was made by d’Alembert in 1746, but his proof was incomplete. Among other problems, it implicitly assumed a theorem (now known as Puiseux’s theorem) that would not be proved until more than a century later, and whose proof in turn used the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795).

At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood’s proof had an algebraic gap.[3] The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, filled by Alexander Ostrowski in 1920, as discussed in Smale 1981 [3] (Smale writes, “…I wish to point out what an immense gap Gauss’ proof contained. It is a subtle point even today that a real algebraic plane curve cannot enter a disk without leaving. In fact even though Gauss redid this proof 50 years later, the gap remained. It was not until 1920 that Gauss’ proof was completed. In the reference Gauss, A. Ostrowski has a paper which does this and gives an excellent discussion of the problem as well…”).

A rigorous proof was first published by Argand in 1806 (and revisited in 1813);[4] it was here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another version of his original proof in 1849.

None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which in modern terms amounts to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981.

Algebraic Proofs

These proofs use two facts about real numbers that require only a small amount of analysis (more precisely, the intermediate value theorem):

  • every polynomial with odd degree and real coefficients has some real root;
  • every non-negative real number has a square root.

The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(√−1) is algebraically closed.

As mentioned above, it suffices to check the statement “every non-constant polynomial p(z) with real coefficients has a complex root”. This statement can be proved by induction on the greatest non-negative integer k such that 2^k divides the degree n of p(z). Let a be the coefficient of z^n in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, …, zn in F such that

  p(z)=a(z-z_{1})(z-z_{2})\cdots (z-z_{n}).

If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2^k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2^{k−1} m′ with m′ odd. For a real number t, define:

  q_{t}(z)=\prod _{1\leq i<j\leq n}\left(z-z_{i}-z_{j}-tz_{i}z_{j}\right).\,

Then the coefficients of qt(z) are symmetric polynomials in the zi with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a1, a2, …, (−1)^n an. So qt(z) has in fact real coefficients. Furthermore, the degree of qt(z) is n(n − 1)/2 = 2^{k−1} m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, qt has at least one complex root; in other words, zi + zj + tzizj is complex for two distinct elements i and j from {1, …, n}. Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that zi + zj + tzizj and zi + zj + szizj are complex (for the same i and j). So, both zi + zj and zizj are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that zi and zj are complex numbers, since they are roots of the quadratic polynomial z^2 − (zi + zj)z + zizj.

Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so “odd” can be replaced by “odd prime”, and furthermore this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since (x^2 + 1)^k f(x) has a root, where k is chosen so that deg(f) + 2k ∈ I). Mohsen Aliabadi generalized Shipman’s result for any field in 2013, proving that a sufficient condition for an arbitrary field (of any characteristic) to be algebraically closed is that it has a root for every polynomial of prime degree.[8]

Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension.[9] Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2 and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R; thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of K/C with [M:C] = 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof.

Source: WikipediA

Another Proof
Here is a proof of the equivalent statement “every non-constant complex polynomial p is surjective”.
1) Let C be the finite set of critical points, i.e. the points z with p′(z) = 0. C is finite by elementary algebra.
2) Remove p(C) from the codomain and call the resulting open set B, and remove its inverse image p^{-1}(p(C)) from the domain, calling the resulting open set A. Note that the inverse image is again a finite set.
3) Now p restricts to an open map from A to B, which is also closed, because any polynomial is proper (inverse images of compact sets are compact). The image of A is therefore a nonempty open and closed subset of B; since B is connected, the image is all of B, and since the removed values p(C) are values of p as well, p is surjective.
I like this proof because you can try it for real polynomials and it breaks down at step 3): if you remove a single point from the line you disconnect it, while you can remove a finite set from the plane and leave it connected.
Source: mathoverflow

 

First Fundamental Theorem of Calculus

Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by

F(x)=\int _{a}^{x}\!f(t)\,dt.

Then, F is uniformly continuous on [a, b], differentiable on the open interval (a, b), and

  F'(x)=f(x)\,

for all x in (a, b).

Alternatively, if f is merely Riemann integrable, then F is continuous on [a, b] (but not necessarily differentiable).
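
A numerical sanity check of the first part (a sketch added here for illustration; the choice f = cos is arbitrary): build F as a cumulative trapezoidal approximation of the integral and compare its difference quotient with f.

import numpy as np

f = np.cos
a, b, n = 0.0, 3.0, 200_000
x = np.linspace(a, b, n)
dx = x[1] - x[0]

# Cumulative trapezoidal approximation of F(x) = integral of f from a to x.
F = np.concatenate(([0.0], np.cumsum((f(x[1:]) + f(x[:-1])) * dx / 2)))

# Central-difference approximation of F' at the interior grid points.
F_prime = (F[2:] - F[:-2]) / (2 * dx)

print(np.max(np.abs(F_prime - f(x[1:-1]))))   # tiny: numerically, F' agrees with f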

Corollary

The fundamental theorem is often employed to compute the definite integrals of a function f for which an antiderivative F is known. Specifically, if f is a real-valued continuous function on [a, b], and F is an antiderivative of f on [a, b], then

\int _{a}^{b}f(t)\,dt=F(b)-F(a).

The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem.

Second Fundamental Theorem of Calculus

Let f and F be real-valued functions defined on a closed interval [a, b] such that F is continuous on all of [a, b] and the derivative of F is f almost everywhere on [a, b]. That is, f and F are functions such that for all x in (a, b), except for perhaps a countable number of points in the interval:

  F'(x)=f(x).\

If f is Riemann integrable on [a, b] then

\int _{a}^{b}f(x)\,dx=F(b)-F(a).
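
The classic illustration of why the weaker hypotheses matter (a sketch added here, not from the article): F(x) = |x| is continuous on [−1, 1] and satisfies F′(x) = sign(x) everywhere except at 0, and the formula still gives the correct value of the integral.

import numpy as np

a, b, n = -1.0, 1.0, 1_000_001
x = np.linspace(a, b, n)
riemann = np.sum(np.sign(x[:-1])) * (x[1] - x[0])   # left Riemann sum of f(x) = sign(x)

print(riemann)              # approximately 0
print(abs(b) - abs(a))      # F(b) - F(a) = |1| - |-1| = 0, as the theorem predicts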

The second part is somewhat stronger than the corollary because it does not assume that f is continuous.

When an antiderivative F exists, then there are infinitely many antiderivatives for f, obtained by adding an arbitrary constant to F. Also, by the first part of the theorem, antiderivatives of f always exist when f is continuous.

Source: WikipediA

 

Fundamental Theorem of Galois Theory

Explicit Description of the Correspondence

In its most basic form, the theorem asserts that given a field extension E/F that is finite and Galois, there is a one-to-one correspondence between its intermediate fields and subgroups of its Galois group. (Intermediate fields are fields K satisfying F ⊆ K ⊆ E; they are also called subextensions of E/F.)

For finite extensions, the correspondence can be described explicitly as follows.

  • For any subgroup H of Gal(E/F), the corresponding fixed field, denoted E^H, is the set of those elements of E which are fixed by every automorphism in H.
  • For any intermediate field K of E/F, the corresponding subgroup is Aut(E/K), that is, the set of those automorphisms in Gal(E/F) which fix every element of K.

The fundamental theorem says that this correspondence is a one-to-one correspondence if (and only if) E/F is a Galois extension. For example, the topmost field E corresponds to the trivial subgroup of Gal(E/F), and the base field F corresponds to the whole group Gal(E/F).

The notation Gal(E/F) is only used for Galois extensions. If E/F is Galois, then Gal(E/F) = Aut(E/F). If E/F is not Galois, then the “correspondence” gives only an injective (but not surjective) map from {subgroups of Aut(E/F)} to {subfields of E/F}, and a surjective (but not injective) map in the reverse direction. In particular, if E/F is not Galois, then F is not the fixed field of any subgroup of Aut(E/F).

Properties of the Correspondence

The correspondence has the following useful properties.

  • It is inclusion-reversing. The inclusion of subgroups H1 ⊆ H2 holds if and only if the inclusion of fields E^{H1} ⊇ E^{H2} holds.
  • Degrees of extensions are related to orders of groups, in a manner consistent with the inclusion-reversing property. Specifically, if H is a subgroup of Gal(E/F), then |H| = [E:E^H] and |Gal(E/F)/H| = [E^H:F].
  • The field E^H is a normal extension of F (or, equivalently, Galois extension, since any subextension of a separable extension is separable) if and only if H is a normal subgroup of Gal(E/F). In this case, the restriction of the elements of Gal(E/F) to E^H induces an isomorphism between Gal(E^H/F) and the quotient group Gal(E/F)/H.
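
A standard worked example of the correspondence, added here for concreteness (it is not part of the article): take F = \mathbb{Q} and E = \mathbb{Q}(\sqrt{2},\sqrt{3}), a Galois extension of degree 4 with group \mathbb{Z}/2\times\mathbb{Z}/2=\{1,\sigma,\tau,\sigma\tau\}, where \sigma negates \sqrt{2} and fixes \sqrt{3}, while \tau negates \sqrt{3} and fixes \sqrt{2}. The inclusion-reversing correspondence then reads

\begin{array}{ccc}
\text{subgroup } H & & \text{fixed field } E^{H} \\
\{1\} & \longleftrightarrow & \mathbb{Q}(\sqrt{2},\sqrt{3}) \\
\{1,\sigma\} & \longleftrightarrow & \mathbb{Q}(\sqrt{3}) \\
\{1,\tau\} & \longleftrightarrow & \mathbb{Q}(\sqrt{2}) \\
\{1,\sigma\tau\} & \longleftrightarrow & \mathbb{Q}(\sqrt{6}) \\
\{1,\sigma,\tau,\sigma\tau\} & \longleftrightarrow & \mathbb{Q}
\end{array}

and, since the group is abelian, every subgroup is normal, so each of these intermediate fields is itself Galois over \mathbb{Q}.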

Source: WikipediA

 

Fundamental theorem of Riemannian geometry

In Riemannian geometry, the fundamental theorem of Riemannian geometry states that on any Riemannian manifold (or pseudo-Riemannian manifold) there is a unique torsion-free metric connection, called the Levi-Civita connection of the given metric. Here a metric (or Riemannian) connection is a connection which preserves the metric tensor. More precisely:

Fundamental Theorem of Riemannian Geometry. Let (M, g) be a Riemannian manifold (or pseudo-Riemannian manifold). Then there is a unique connection ∇ which satisfies the following conditions:

  • for any vector fields X, Y, Z we have
\partial _{X}\langle Y,Z\rangle =\langle \nabla _{X}Y,Z\rangle +\langle Y,\nabla _{X}Z\rangle ,
where \partial _{X}\langle Y,Z\rangle denotes the derivative of the function \langle Y,Z\rangle along the vector field X;
  • for any vector fields X, Y,
\nabla _{X}Y-\nabla _{Y}X=[X,Y],
where [X, Y] denotes the Lie bracket of the vector fields X, Y.

The first condition means that the metric tensor is preserved by parallel transport, while the second condition expresses the fact that the torsion of ∇ is zero.

An extension of the fundamental theorem states that given a pseudo-Riemannian manifold there is a unique connection preserving the metric tensor with any given vector-valued 2-form as its torsion. The difference between an arbitrary connection (with torsion) and the corresponding Levi-Civita connection is the contorsion tensor.

The following technical proof presents a formula for Christoffel symbols of the connection in a local coordinate system. For a given metric this set of equations can become rather complicated. There are quicker and simpler methods to obtain the Christoffel symbols for a given metric, e.g. using the action integral and the associated Euler-Lagrange equations.
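
The excerpt above does not reproduce that proof, but the resulting coordinate expression is the standard one (stated here for completeness): the Christoffel symbols of the Levi-Civita connection are

\Gamma ^{k}{}_{ij}={\tfrac {1}{2}}g^{kl}\left(\partial _{i}g_{jl}+\partial _{j}g_{il}-\partial _{l}g_{ij}\right),

where g^{kl} is the inverse metric and summation over repeated indices is understood. Equivalently, existence and uniqueness can be read off from the Koszul formula

2\langle \nabla _{X}Y,Z\rangle =\partial _{X}\langle Y,Z\rangle +\partial _{Y}\langle X,Z\rangle -\partial _{Z}\langle X,Y\rangle +\langle [X,Y],Z\rangle -\langle [X,Z],Y\rangle -\langle [Y,Z],X\rangle ,

whose right-hand side involves only the metric and Lie brackets.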

Source: https://en.wikipedia.org/wiki/Fundamental_theorem_of_Riemannian_geometry

 

The Fundamental Theorem of Linear Algebra

Not everyone knows about the fundamental theorem of linear algebra, but there is an excellent 1993 article by Gil Strang that describes its importance. For an m × n matrix A, the theorem relates the dimensions of the row space of A (R(A)) and the nullspace of A (N(A)). The result is that dim(R(A)) + dim(N(A)) = n.

The theorem also describes four important subspaces and describes the geometry of A and Aᵀ when they are thought of as linear transformations. The theorem shows that these subspaces come in orthogonal pairs: the row space is orthogonal to the nullspace, and the column space is orthogonal to the left nullspace. (Strang actually combines four theorems into his statement of the Fundamental Theorem, including a theorem that motivates the statistical practice of ordinary least squares.)
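
A quick numerical check of the dimension count (a sketch added here; it uses NumPy and SciPy, which are not mentioned in the article):

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)   # an m x n matrix with m = 4, n = 6

rank = np.linalg.matrix_rank(A)        # dim R(A), the dimension of the row space
nullity = null_space(A).shape[1]       # dim N(A), computed independently from the SVD
print(rank, nullity, rank + nullity)   # the last number is always 6 = n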

Source: WikipediA

 

The Fundamental Theorem of Statistics

Although most statistical textbooks do not single out a result as THE fundamental theorem of statistics, I can think of two results that could make a claim to the title. These results are based in probability theory, so perhaps they are more aptly named fundamental theorems of probability.

  • The Law of Large Numbers (LLN) provides the mathematical basis for understanding random events. The LLN says that if you repeat a trial many times, then the average of the observed values tends to be close to the expected value. (In general, the more trials you run, the better the estimate.) For example, if you toss a fair die many times and compute the average of the numbers that appear, the average should converge to 3.5, which is the expected value of a roll because (1+2+3+4+5+6)/6 = 3.5. The same theorem ensures that about one-sixth of the faces are 1s, one-sixth are 2s, and so forth.
  • The Central Limit Theorem (CLT) states that the mean of a sample of size n is approximately normally distributed when n is large. Perhaps more importantly, the CLT provides the mean and the standard deviation of the sampling distribution in terms of the sample size, the population mean μ, and the population variance σ². Specifically, the sampling distribution of the mean is approximately normal with mean μ and standard deviation σ/√n.

Of these, the Central Limit Theorem gets my vote for being the Fundamental Theorem of Statistics. The LLN is important, but hardly surprising. It is the basis for frequentist statistics and assures us that large random samples tend to reflect the population. In contrast, the CLT is surprising because the sampling distribution of the mean is approximately normal regardless of the distribution of the original data! As a bonus, the CLT is useful computationally: it forms the basis for many statistical tests by making it possible to estimate the accuracy of a statistical estimate. Lastly, the CLT connects important concepts in statistics: means, variances, sample size, and the accuracy of point estimates.
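
A tiny simulation makes both statements visible (the code below is my own illustration using NumPy; the seed and sample sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
rolls = rng.integers(1, 7, size=(10_000, 50))   # 10,000 samples of 50 fair-die rolls each

# Law of Large Numbers: the overall average of many rolls is close to 3.5.
print(rolls.mean())                             # close to 3.5

# Central Limit Theorem: the 10,000 sample means are approximately normal with
# mean 3.5 and standard deviation sigma/sqrt(n), where sigma^2 = 35/12 and n = 50.
means = rolls.mean(axis=1)
print(means.mean(), means.std())                # about 3.5 and sqrt(35/12)/sqrt(50), i.e. 0.2415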

Do you have a favorite “Fundamental Theorem”? Do you marvel at an applied theorem such as the fundamental theorem of linear programming, or chuckle at pseudo-theorems such as the fundamental theorem of software engineering? Share your thoughts in the comments.

https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem

 

The Law of Large Numbers

Source: https://en.wikipedia.org/wiki/Law_of_large_numbers

 
