Wednesday, July 30, 2014

Multiplying Spaces!

Believe it or not, in Math we can multiply not only numbers but also spaces! We can multiply two spaces to get a bigger space, usually one of higher dimension.

The 'multiplication' I'm referring to here is known as the tensor product. The things/objects in these product spaces are called tensors. (Tensors are like vectors, in a way.)

Albert Einstein used tensors in his Special and General Theory of Relativity (his theory of gravity). Tensors are also used in several branches of Physics, like the theory of elasticity, where various stresses and forces act in various ways, and certainly in quantum field theory.

It may sound crazy to say you can "multiply spaces" the way we multiply numbers, but it can be done in a precise and logical way. Here I will spare you the technical details and just try to show you the idea that makes it possible.

Q. What do you mean by 'spaces'?

I mean a set of things that behave like 'vectors': you can add two vectors and get a third vector, and you can scale a vector by any real number. The latter is called scalar multiplication, so if $v$ is a vector, you can multiply it by $0.23$ or by $-300.87$ etc. and get another vector: $0.23v$, $-300.87v$, etc. The technical name for such a space is a vector space.

A straight line that extends indefinitely in both directions would be a good example (a Euclidean line).

Another example: take the $xy$-plane, 2D-space or simply 2-space; or take $xyz$-space; or, if you like, $xyzt$-spacetime, also known as Minkowski space, which has 4 dimensions.

Q. How do you 'multiply' such spaces?

First, the notation. If $U$ and $V$ are spaces, their tensor product space is written as $U \otimes V$. (It's the multiplication symbol with a circle around it.)

If this is to be an actual multiplication of spaces, there is one natural requirement we would want: the dimension of the tensor product space $U \otimes V$ should turn out to be the product of the dimensions of $U$ and $V$.

So if $U$ has dimension 2 and $V$ has dimension 3, then $U \otimes V$ ought to have dimension $2 \times 3 = 6$.  And if $U$ and $V$ are straight lines, so each of dimension 1, then $U \otimes V$ will also be of dimension 1.

Q. Hey, wait a second, that doesn't quite answer my question. Are you dodging the issue!?

Ha! Yeah, just wanted to see if you're awake! ;-) And you are! Ok, here's the deal, without going into too much detail. We pointed out above how you can scale vectors by real numbers. So if you have a vector $v$ from the space $V$ you can scale it by $0.23$ and get the vector $0.23v$. Now imagine that we could scale the vector $v$ by the vectors in the other space $U$! So if $u$ is a vector from $U$ and $v$ is a vector from $V$, then you can scale $v$ by $u$ to get what we call their tensor product, which we usually write as

$u \otimes v$.

So when numbers are used to scale vectors, e.g. $0.23v$, we could also write that as $0.23 \otimes v$. But we don't normally use the $\otimes$ notation when numbers are involved, only when vectors scale other vectors.

Q. So can you also turn this around and refer to $u \otimes v$ as the vector $u$ scaled by the vector $v$?

Absolutely! So we have two approaches to this, and you can show (by a proof) that the two approaches are in fact equivalent. That's what gives rise to a theorem that says


Theorem. $U \otimes V$ is isomorphic to $V \otimes U$.

(In Math, the word 'isomorphism' gives a precise meaning to what I mean by 'equivalent'.)

Anyway, that is the main point in multiplying spaces: you take the vectors of one space and 'scale' them by the vectors of the other space.

There's a neat way to actually see and appreciate this if we use matrices as our vectors. (Yes, matrices can be viewed as vectors!) Matrices are called arrays in computer science.

One example / experiment should drive the point home:

Let's take these two $2 \times 2$ matrices $A$ and $B$:

$A = \begin{bmatrix} 2 & 3 \\ -1 & 5 \end{bmatrix}, \ \ \ \ \ \ \  B = \begin{bmatrix} -5 & 4 \\ 6 & 7 \end{bmatrix}$

To calculate their tensor product $A \otimes B$, you can take $B$ and scale it by each of the numbers contained in $A$! Like this:

$A\otimes B = \begin{bmatrix} 2B & 3B \\ -1B & 5B \end{bmatrix}$



If you write this out, plugging $B$ in, you will get a $4 \times 4$ matrix:



$A\otimes B = \begin{bmatrix} -10 & 8 & -15 & 12 \\ 12 & 14 & 18 & 21 \\ 5 & -4 & -25 & 20 \\ -6 & -7 & 30 & 35 \end{bmatrix}$

Oh, and the space of $2 \times 2$ matrices has dimension 4, and $4 \times 4 = 16$, so the matrix $A\otimes B$ does in fact have 16 entries in it, just as the dimension count predicts. Check!
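If you want to double-check this on a computer, numpy's kron function computes exactly this "scale $B$ by each entry of $A$" product. A minimal sketch (assuming Python with numpy installed; the matrices are the ones above):

import numpy as np

A = np.array([[2, 3], [-1, 5]])
B = np.array([[-5, 4], [6, 7]])

AB = np.kron(A, B)          # scale B by each entry of A, block by block
print(AB)                   # the 4 x 4 matrix written out above
print(AB.shape, AB.size)    # (4, 4) 16 -- the dimension count checks out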

Q. You could also do this the other way, by scaling $A$ using each of the numbers in $B$, right?

Right! That would then give $B\otimes A$.

When you do this you will get a different matrix/array, but if you look closely you'll see that it has the very same set of numbers, just permuted around in a rather simple way.  How? Well, if you switch the two inner columns and the two inner rows of $B\otimes A$ you will get exactly $A\otimes B$!

Try this experiment with the above $A$ and $B$ by working out $B\otimes A$ as we've done (or see the sketch below). This illustrates what we mean in Math by 'isomorphism': even though the results may look different, they are related to one another in a simple 'linear' or 'algebraic' fashion.
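If you'd rather let the computer do the bookkeeping for that experiment, here's a minimal sketch (again Python with numpy; the index list perm is just "swap the two inner rows/columns" written in 0-based form):

import numpy as np

A = np.array([[2, 3], [-1, 5]])
B = np.array([[-5, 4], [6, 7]])

BA = np.kron(B, A)                  # B tensor A
perm = [0, 2, 1, 3]                 # swap the two inner rows and columns
swapped = BA[perm, :][:, perm]
print(np.array_equal(swapped, np.kron(A, B)))   # prints True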

Ok, that's enough. We get the idea. You can multiply spaces by scaling their vectors by each other. It's amazing how such an abstract idea turns out to be a powerful tool for understanding the geometry of spaces, Relativity Theory, and also quantum mechanics (quantum field theory).

Warm Regards,
Sam



Saturday, July 26, 2014

Bertrand's "postulate" and Legendre's Conjecture

Bertrand's "postulate" states that for any positive integer $n > 1$, you can always find a prime number $p$ in the interval

$n < p < 2n$.

It used to be called a "postulate" until it became a theorem when Chebyshev proved it in 1850.

(I saw this while browsing through a group theory book and got curious enough to read up a little more.)

What if instead of looking at $n$ and $2n$ you looked at consecutive squares? So for example you take a positive integer $n$ and you ask whether we can always find at least one prime number between $n^2$ and $(n+1)^2$.

Turns out this is a much harder problem and it's still an open question called:

Legendre's Conjecture. For each positive integer $n$ there is at least one prime $p$ such that

$n^2 < p < (n+1)^2$.

People have used programming to check this for large numbers and have always found such primes, but no proof (or counterexample) is known.
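Here's the kind of brute-force check one can run, as a minimal sketch in Python (plain trial division, so only sensible for modest $n$; the function names are mine, and the same helper works just as well for Bertrand's interval $n < p < 2n$):

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_between_squares(n):
    # is there a prime p with n^2 < p < (n+1)^2 ?
    return any(is_prime(p) for p in range(n*n + 1, (n + 1)**2))

print(all(prime_between_squares(n) for n in range(1, 2001)))   # prints True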

If you compare Legendre's with Bertrand's, note that applying Bertrand's at $n^2$ only guarantees a prime between $n^2$ and $2n^2$, and $(n+1)^2$ is a lot less than $2n^2$ (at least for $n > 2$). In fact, the ratio $2n^2/(n+1)^2$ tends to 2 (not 1) for large $n$. This shows that the range of numbers in the Legendre case is much narrower than in Bertrand's.

The late, great mathematician Erdős proved similar results, obtaining $k$ primes in certain ranges similar to Bertrand's.

A deep theorem related to this is the Prime Number Theorem which gives an asymptotic approximation for the number of primes up to $x$. That approximating function is the well-known $x/\ln(x)$.
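To get a feel for that approximation, here's a minimal sketch comparing the actual prime count with $x/\ln(x)$ (same trial-division helper as above, so keep $x$ modest):

from math import log

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_count(x):
    return sum(1 for m in range(2, x + 1) if is_prime(m))

for x in (100, 1000, 10000):
    print(x, prime_count(x), round(x / log(x), 1))
# prints: 100 25 21.7 / 1000 168 144.8 / 10000 1229 1085.7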







Friday, July 25, 2014

Direct sum of finite cyclic groups

The purpose of this post is to show how a finite direct sum of finite cyclic groups

$\Large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$


can be rewritten as a direct sum of cyclic groups $\Bbb Z_{N_1} \oplus \Bbb Z_{N_2} \oplus \dots \oplus \Bbb Z_{N_n}$ whose orders form a divisibility chain $N_1 | N_2 | \dots | N_n$.

We use the fact that if $p, q$ are coprime, then $\large \Bbb Z_p \oplus \Bbb Z_q = \Bbb Z_{pq}$.

(We'll use equality $=$ for isomorphism $\cong$ of groups.)
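A quick way to convince yourself of that fact: when $\gcd(p,q) = 1$, the element $(1,1)$ already has order $pq$ in $\Bbb Z_p \oplus \Bbb Z_q$, so that group is cyclic of order $pq$. A minimal sketch in Python (the function name is mine; it just finds the order of $(1,1)$, which is $\mathrm{lcm}(p,q)$):

from math import gcd

def order_of_one_one(p, q):
    # smallest k >= 1 with k = 0 mod p and k = 0 mod q, i.e. lcm(p, q)
    k = 1
    while k % p != 0 or k % q != 0:
        k += 1
    return k

print(order_of_one_one(4, 9), gcd(4, 9))   # 36 1  -> Z_4 + Z_9 = Z_36
print(order_of_one_one(4, 6), gcd(4, 6))   # 12 2  -> Z_4 + Z_6 is not cyclic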

Let $p_1, p_2, \dots p_k$ be the list of prime numbers in the prime factorizations of all the integers $m_1, \dots, m_n$.

Write each $m_j$ in its prime power factorization $\large m_j = p_1^{a_{j1}}p_2^{a_{j2}} \dots p_k^{a_{jk}}$ (some exponents $a_{jl}$ may be zero, in which case the corresponding factor $\Bbb Z_{p_l^0} = \Bbb Z_1$ is just the trivial group). Therefore

$\Large \Bbb Z_{m_j} = \Bbb Z_{p_1^{a_{j1}}} \oplus \Bbb Z_{p_2^{a_{j2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{jk}}}$

and so the above direct sum  $\large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$ can be written out in matrix/row form as the direct sum of the following rows:

$\Large\Bbb Z_{p_1^{a_{11}}} \oplus \Bbb Z_{p_2^{a_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{1k}}}$

$\Large\Bbb Z_{p_1^{a_{21}}} \oplus \Bbb Z_{p_2^{a_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{a_{n1}}} \oplus \Bbb Z_{p_2^{a_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{nk}}}$

Now look at the powers of $p_1$ in the first column. Those summands can be permuted so that the exponents of $p_1$ appear in increasing order. Do the same with the powers of $p_2$ and of the other $p_l$: rearrange their summands so that the exponents are in increasing order. (Permuting the summands of a direct sum doesn't change it up to isomorphism.) So the above direct sum is isomorphic to


$\Large\Bbb Z_{p_1^{b_{11}}} \oplus \Bbb Z_{p_2^{b_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{1k}}}$

$\Large\Bbb Z_{p_1^{b_{21}}} \oplus \Bbb Z_{p_2^{b_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{b_{n1}}} \oplus \Bbb Z_{p_2^{b_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{nk}}}$

where, for example, the exponents $b_{11} \le b_{21} \le \dots \le b_{n1}$ are a rearrangement of the numbers $a_{11}, a_{21}, \dots, a_{n1}$ (in the first column) in increasing order.  Do the same for the other columns.

Now combine each of these rows back into a single cyclic group by multiplying the orders (this uses the coprimality fact above, since distinct prime powers are coprime), thus

$\Large\ \ \Bbb Z_{N_1}$
$\Large \oplus \Bbb Z_{N_2}$
$\Large \vdots$
$\Large \oplus \Bbb Z_{N_n}$

where

$\large N_1 = p_1^{b_{11}} p_2^{b_{12}} \dots p_k^{b_{1k}}$,
$\large N_2 = p_1^{b_{21}} p_2^{b_{22}} \dots p_k^{b_{2k}}$,
$\large \vdots$
$\large N_n = p_1^{b_{n1}} p_2^{b_{n2}} \dots p_k^{b_{nk}}$.

In view of the fact that $b_{1j} \le b_{2j} \le \dots \le b_{nj}$ for each $j$, every prime power appearing in $N_i$ divides the corresponding prime power in $N_{i+1}$, and so $N_1 | N_2 | \dots | N_n$, as required. $\blacksquare$
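If you want to play with this, here's a minimal sketch in Python that carries out exactly the rearrangement above: factor each $m_j$, sort each prime's column of exponents in increasing order, and multiply each row back together. (The function names are mine, and trivial factors $N_i = 1$ may show up at the front.)

from math import prod

def factorize(m):
    # prime factorization of m as a dict {prime: exponent}
    factors = {}
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def invariant_factors(orders):
    # rearrange Z_{m_1} + ... + Z_{m_n} into Z_{N_1} + ... + Z_{N_n}
    # with N_1 | N_2 | ... | N_n
    facts = [factorize(m) for m in orders]
    primes = sorted({p for f in facts for p in f})
    exps = [[f.get(p, 0) for p in primes] for f in facts]   # the matrix a_{jl}
    cols = [sorted(col) for col in zip(*exps)]              # sort each column
    return [prod(p ** col[i] for p, col in zip(primes, cols))
            for i in range(len(orders))]

print(invariant_factors([6, 4]))      # [2, 12]   : Z_6 + Z_4 = Z_2 + Z_12
print(invariant_factors([4, 9, 6]))   # [1, 6, 36]: Z_4 + Z_9 + Z_6 = Z_1 + Z_6 + Z_36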









LaTeX on Blogger

LaTeX works here if you add the small script below:

Short exact sequence:
$\large 0 \to H \to G \to G/H \to 0$
and an integral
$\large \int \sqrt{x} dx$.

Each finite Abelian group is isomorphic to a direct sum of cyclic groups
$\large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$
where $m_1|m_2|\dots | m_n$.

(One of my favorite results from group theory.)


Thanks to a gentle soul who responded at tex.stackexchange:

To get LaTeX to work on Blogger, go to Design, then to "edit HTML", then to "edit template". In the HTML file, insert the following script right after the <head> tag:

<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js">
MathJax.Hub.Config({
 extensions: ["tex2jax.js","TeX/AMSmath.js","TeX/AMSsymbols.js"],
 jax: ["input/TeX", "output/HTML-CSS"],
 tex2jax: {
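      // math delimiters: $...$ and \(...\) for inline math, $$...$$ and \[...\] for displayed math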
     inlineMath: [ ['$','$'], ["\\(","\\)"] ],
     displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
 },
 "HTML-CSS": { availableFonts: ["TeX"] }
});
</script>