## Saturday, February 14, 2015

### How to see Schrödinger’s Cat as half-alive

Schrödinger’s Cat is a thought experiment used to illustrate the weirdness of quantum mechanics. Namely, unlike classical Newtonian mechanics where a particle can only be in one state at a given time, quantum theory says that it can be in two or more states ‘at the same time’. The latter is often referred to as ‘superposition’ of states.
If D refers to the state that the cat is dead, and if A is the state that the cat is alive, then classically we can only have either A or D at any given time — we cannot have both states or neither.
In quantum theory, however, we can not only have states A and D but can also have many more 'in between' states, such as for example this combination or superposition state:
$\large \Psi = 0.8A + 0.6D$
Notice that the coefficient numbers 0.8 and 0.6 here have their squares adding up to exactly 1:
$\large (0.8)^2 + (0.6)^2 = 0.64 + 0.36 = 1.$
This is because these squares, $(0.8)^2$ and $(0.6)^2$, are the probabilities associated with states A and D, respectively. (The fact that this sum is 1 is what it means for the wavefunction to be 'normalized'.) Interpretation: there is a 64% chance that this mixed state $\Psi$ will collapse to the state A (cat is alive) and a 36% chance it will collapse to D (cat is dead), when one proceeds to measure or find out the status of a cat described by the quantum state $\Psi$.
Realistically, it is hard to comprehend that a cat can be described by such a state. Well, maybe it isn't that hard if we have some information about the probability of decay of the radioactive substance inside the box.
Nevertheless, I thought to share another related experiment where one can better 'see' and appreciate superposition states like $\Psi$ above. The great physicist Richard Feynman illustrated this beautifully using the Stern-Gerlach experiment, which I will tell you about. (See chapters 5 and 6 of Volume III of the Feynman Lectures on Physics.)
In this experiment we have a magnet with north and south poles creating a magnetic field. A beam of spin-half particles comes out of a furnace and heads toward the magnetic field. The result is that the beam of particles splits into two beams. What essentially happened is that the magnetic field made a 'measurement' of the spins: some particles turned into spin-up particles (the upper half of the split beam) and the others into spin-down (the bottom beam). So the incoming beam from the furnace is like the cat being in the superposition state $\Psi$, and the magnetic field is the agent that determined — measured! — whether the cat is alive (upper beam) or dead (lower beam). (Often in physics books the Dirac 'bra-ket' notation $|\uparrow\rangle$ is used for the spin-up state and $|\downarrow\rangle$ for spin-down.)

In a way, you can now see the initial beam emanating from the furnace as being in a superposition state.
Ok, so the superposition state of the initial beam has now collapsed, putting each particle into one of two specific states: the spin-up state (upper beam) or the spin-down state (lower beam). Does this mean that these states are no longer superposition states?
Yes and No! They are no longer in superposition if the split beams enter another magnetic field that points in the same direction as the original one. If you pass the upper beam into a second identical magnetic field, it will remain an upper beam — and the same with the lower beam. The magnetic field ‘made a decision’ and it’s going to stick with it! :-)
That is why we call these states (upper and lower beams) ‘eigenstates’ of the original magnetic field. They are no longer mixed superposition states — the cat is either dead or alive as far as this field is concerned and not in any ‘in between fuzzy’ states.
Ok, that addresses the “Yes” part of the answer. Now for the “No” part.
Let’s suppose we have a different magnetic field, one just like the original one but perpendicular in direction to it. (So it’s like you’ve rotated the original field by 90 degrees; you can rotate by a different angle as well.)
In this case if you pass the original upper beam (that was split by the first magnetic field) into the second perpendicular field, this upper beam will split into two beams! So with respect to the second field the upper beam is now in a superposition state!
Essential Principle: the notion of superposition (in quantum theory) is always with respect to a variable (or observable, or quantity) that is being measured. In our case, that variable is the spin component along the magnetic field's direction; the field is the agent that measures it. (And here we have two differently oriented fields, hence two different variables, or observables: non-commuting observables, as we say in quantum theory.)
Therefore, what was an eigenstate (collapsed state) for the first magnetic field (namely the upper beam) is no longer an eigenstate (i.e., is no longer 'collapsed') for the second (perpendicular) magnetic field. Thus if a wavefunction is collapsed with respect to one field, it is not necessarily collapsed with respect to the second field! A collapsed wavefunction for one variable can be a superposition wavefunction for another variable.
The Schrödinger Cat experiment might be better understood not as one single experiment, but as a whole series of many (infinitely many) boxes with cats in them. This view better reflects the fact that we have a beam of particles, each of which is being 'measured' by the field to determine its spin status (up or down).
Why do you say 'infinitely many'? Because, for example, when we say that a tossed coin lands heads or tails with probability 1/2, the way to observe that 1/2 is to toss the coin a very large number of times, count the heads, and divide by the number of tosses. (If you toss it only 10 times, heads might come up 6 or 4 times instead of exactly 5, yet the probability would still be 1/2.)
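This frequency picture is easy to demonstrate numerically. Below is a minimal sketch (in Python; the variable names are my own) that simulates measuring a large number of identically prepared cats in the state $\Psi = 0.8A + 0.6D$; the observed frequency of 'alive' approaches $0.8^2 = 0.64$:

```python
import random

# Amplitudes of the superposition Psi = 0.8 A + 0.6 D (A = alive, D = dead).
# Their squares give the probabilities: P(A) = 0.64, P(D) = 0.36.
p_alive = 0.8 ** 2

# Simulate many "boxes": each measurement collapses Psi to A or D.
random.seed(1)
trials = 100_000
alive = sum(random.random() < p_alive for _ in range(trials))

print(alive / trials)  # close to 0.64 for a large number of trials
```

Just as with the coin, a small number of trials would give a ratio somewhat off from 0.64; the probability only emerges in the long run.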
Best wishes, Sam
Reference. R. Feynman, R. Leighton, M. Sands, The Feynman Lectures on Physics, Vol. III (Quantum Mechanics), chapters 5 and 6.

Postscript on determinism. It occurred to me to add a uniquely quantum mechanical feature that is contrary to classical physics thinking.   The Stern-Gerlach experiment is a good place to see this feature.
We noted that when the spin-half particles emerge from the furnace and into the magnetic field, they split into upper and lower beams. Classically, one might think that before entering the field individual particles already had their spins determined as either being spin up or spin down before a measurement takes place (i.e., before entering the magnetic field) — just as one might say that the earth already had a certain velocity as it moves around the sun before we measure it. Quantum theory does not see it that way. In the Copenhagen Interpretation of quantum theory (the predominant view), one cannot say a priori that a particle's spin was already either up or down before entering the field. (Recall, the magnetic field in this case is the agent making the spin direction measurement.) The reason we cannot say this is that if we had rotated the field at an angle (say at right angles to the original), the beams would still split into two, but not the same two beams as before! So we cannot say that the particles were already in either an up or down spin state. That is one of the strange features of quantum theory, but wonderfully illustrated by Stern-Gerlach.

Mathematics. In vector space theory there is a neat way to illustrate this quantum phenomenon by means of 'bases'. For example, the vectors $(1,0)$ and $(0,1)$ form a basis for 2D Euclidean space $\Bbb R^2$. So any vector $(x,y)$ can be expressed (uniquely!) as a superposition of them; thus,
$\large (x,y) = x(1,0) + y(0,1).$
So this would be, to make an analogy, like how the beam of particles, described by $(x,y)$, can be split into two beams — described by the basis vectors $(1,0)$ and $(0,1)$.
However, there are a myriad of other bases. For example, $(2,1)$ and $(1,1)$ also form a basis for $\Bbb R^2$. A general vector $(x,y)$ can still be expressed in terms of a superposition of these two:
$\large (x,y) = a(2,1) + b(1,1)$
for some constants $a$ and $b$ (which are easy to solve for in terms of $x, y$). So this other basis could, by analogy, represent a magnetic field that is at an angle with respect to the original — and its associated beams $(2,1)$ and $(1,1)$ (its eigenstates!) would be different because of their different directions. As a matter of fact, we can see here that these eigenstates (collapsed states), represented by $(2,1)$ and $(1,1)$, are actual (uncollapsed) superpositions of the former two, namely $(1,0)$ and $(0,1)$. And vice versa!
Analogy: let's suppose, to take a specific example, that the vector $(5,3)$ represents the particle state coming out of the furnace. Let's think of the basis vectors $(1,0)$ and $(0,1)$ as representing the spin-up and spin-down beams, respectively, as the beam enters the first magnetic field, and let the other basis vectors $(2,1)$ and $(1,1)$ represent the spin-up and spin-down beams as they enter the second (perpendicular) magnetic field. Then the particle state $(5,3)$ is in fact a superposition in each of these magnetic fields (bases)! This is so because
$\large (5,3) = 5(1,0) + 3(0,1)$
$\large (5,3) = 2(2,1) + 1(1,1).$
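For completeness, the coefficients in the second basis come from solving a small linear system: $(x,y) = a(2,1) + b(1,1)$ means $2a + b = x$ and $a + b = y$, so $a = x - y$ and $b = 2y - x$. A quick sketch (the function name is my own):

```python
def coefficients_in_basis(x, y):
    """Solve (x, y) = a*(2,1) + b*(1,1) for the superposition
    coefficients a and b. From 2a + b = x and a + b = y we get
    a = x - y and b = 2y - x."""
    return x - y, 2 * y - x

a, b = coefficients_in_basis(5, 3)
print(a, b)  # 2 1, matching (5,3) = 2(2,1) + 1(1,1)
```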

It is now quite conceivable that the initial mixed state of particles $(5,3)$, as they exit the furnace, can in fact split in any number of ways as they enter a magnetic field, depending on the orientation of the field. I.e., it's not as though the particles were initially all either $(1,0),(0,1)$ or $(2,1),(1,1)$; rather, $(5,3)$ is a simultaneous superposition with respect to each pair. (In fact, $(5,3)$ is a combination (superposition) in an infinite number of bases, because there are infinitely many directions the field can point in.)
Indeed, it now looks like this strange feature of quantum theory can be described naturally from a mathematical perspective! Vector Space bases furnish a good example!
**********************************************************

## Saturday, November 29, 2014

### Letter to Hom on Christian Salvation

Alright, now we do a bit of theology! Aren't you excited? From math to quantum to cosmology to the heavenly Father of Jesus! Quite a healthy wide range of things for the spirit to grow, wouldn't you say? ;-)

This specific post/thread is for dear brother Hom on twitter, who wrote to me to discuss and hear my thoughts on the issue of whether a Christian who is saved can lose or keep his/her salvation after numerous sins committed following their commitment to faith in Christ. In his tweets to me, Hom said:
"I was taught that in my early years and I "kinda" maintain that belief. Yet I know NO passages in the Biblical text that make the point clear. Can you help?"
The belief that Hom alluded to here is the view I expressed in our twitter discussion: there can be cases of alleged Christians who could 'lose' their salvation, or grace, through the continued sinful lives they lead afterwards -- even when that life is replete with sins of egregious proportions (such as committing genocide, or indulging in the major sins stated in the Bible, without naming them all).

Some believers are of the view that once you have given your life to Christ as your Savior, you are also saved from your subsequent sins, no matter how big or small they may be after your pronounced belief. Consequently, even if you live a life of crime, rape, murder, stealing, lying, and any sin you can imagine (so much for the Ten Commandments!), you are still saved because you cannot lose it. On this view, Hitler, being a professed Christian, would still be saved by grace even after all the inhumanity and genocide he inflicted on tens of millions of human beings. So you are free to do as you please, to sin to whatever degree you wish (to whatever extreme), and you are assured that you are saved and will go to heaven.

Of course, I cannot accept such a view. (And I never have.) That is not at all how I read the Bible, especially the teachings of the New Testament on how Christians should conduct their lives. (The Hebrew Scriptures already prescribe divine punishment for various of these sins, even for professed believers within the Mosaic community!) Indeed, St Paul dedicated a fair amount of his time, travels, and writings to churches where he heard that numerous egregious sins continued to be committed by (alleged) Christian members who believed that, having been saved, they could now do whatever they wanted. That is why St Paul explicitly emphasized:
"What shall we say, then? Shall we go on sinning so that grace may increase? By no means! We are those who have died to sin; how can we live in it any longer? Or don’t you know that all of us who were baptized into Christ Jesus were baptized into his death? We were therefore buried with him through baptism into death in order that, just as Christ was raised from the dead through the glory of the Father, we too may live a new life." (Romans 6:1-4, NIV.)
The Letter of James also teaches that a Christian is responsible for exemplifying his/her conduct of life through behavior, actions, or works -- backed, of course, by their faith in Christ.

The Lord Jesus taught us to judge a tree by the fruit that it bears - very eloquently, simply, and without any complicated theology. When the tree produces bad fruit, what is to be done unto it? The Master said
"Every tree that does not bear good fruit is cut down and thrown into the fire. Thus, by their fruit you will recognize them." (Matthew 7:19-20, NIV.)
Clearly, such a tree cannot have been saved or ascribed salvation to begin with, even if that tree led others to believe that it was a good tree.

Therefore, in extreme cases like Hitler, or anyone else claiming to be a Christian, such continued bearing of bad fruit would at the very least cast serious doubt on their claims to being believers. Would we believe someone who claims allegiance to the US Constitution only to see that individual violate its articles and laws time and again (even in extreme ways)? I would certainly at least question them. So we're not saying that they had grace and then lost it, but that perhaps they never had grace in the first place (even though we had assumed they did by taking their claim at face value).

There are many examples like the above in the Bible that tell me that the view I expressed is far more reasonable (or at least less problematic) than the view that the Christian community could be allowed to harbor such horrific individuals who do such harm to the faith. If Christians are serious about Jesus' teaching, they are responsible for acting it out in their hearts and minds as well as with their fellow man, their neighbor. I hope that I have shared my thoughts with you, Hom, in a gentle spirit, even though I am no Bible teacher and have no degree in theology! I speak as just one believer, sharing my thoughts and experiences. Ultimately, Jesus knows the full precise answers. As St Paul said, we know in part and we prophesy in part; and in another place he says, "For now we see through a glass, darkly."

Yours in Christ,
Sam

## Saturday, November 8, 2014

### A game with $\pi$

Here's an image of something I wrote down, took a photo of, and posted here for you.

It's a little game you can play with any irrational number. I took $\pi$ as an example.
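In case the photo doesn't display for you, the expansion it shows is presumably the standard continued fraction of $\pi$, whose first few coefficients are 3, 7, 15, 1, 292:

$\large \pi = 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \cdots}}}}$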

You just learned about an important math concept/process called continued fraction expansions.

With it, you can get very precise rational number approximations for any irrational number to whatever degree of error tolerance you wish.

As an example, if you truncate the above last expansion where the 292 appears (so you omit the "1 over 292" part) you get the rational number $\frac{355}{113}$ which approximates $\pi$ to 6 decimal places. (Better than $\frac{22}{7}$.)

You can do the same thing for other irrational numbers like the square root of 2 or 3. You get their own sequences of whole numbers.

Exercise: for the square root of 2, show that the sequence you get is
1, 2, 2, 2, 2, ...
(all 2's after the 1). For the square root of 3 the continued fraction sequence is
1, 1, 2, 1, 2, 1, 2, 1, 2, ...
(so it starts with 1 and then the pair "1, 2" repeat periodically forever).
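If you want to check these sequences by computer, here is a short sketch using the standard integer recurrence for the continued fraction of $\sqrt{n}$ (the function name is my own; it assumes $n$ is not a perfect square):

```python
from math import isqrt

def continued_fraction_sqrt(n, terms):
    """Continued fraction coefficients of sqrt(n), n not a perfect square,
    via the exact recurrence  x_k = (m_k + sqrt(n)) / d_k,  a_k = floor(x_k).
    Uses integers only, so there is no floating-point drift."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    coeffs = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        coeffs.append(a)
    return coeffs

print(continued_fraction_sqrt(2, 6))  # [1, 2, 2, 2, 2, 2]
print(continued_fraction_sqrt(3, 7))  # [1, 1, 2, 1, 2, 1, 2]
```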

## Monday, August 18, 2014

### 3-4-5 complex number has Infinite order

This is a good exercise/challenge with complex numbers.

Consider the complex number  $Z = \large \frac35 + \frac45 i$. (Where $\large i = \sqrt{-1}$.)

Prove that $Z^n$ is never equal to 1 for any positive whole number $n = 1, 2, 3, 4, \dots$.

This complex number $Z$ comes from the familiar 3-4-5 right triangle that you all know: $3^2 + 4^2 = 5^2$.

In math we sometimes say that an object $X$ has "infinite order" when no positive power of it equals the identity (1, in this multiplicative case). For example, $i$ itself has finite order 4 since $i^4 = 1$, while 2 has infinite order since no positive power of 2 can equal 1. The distinctive feature of $Z$ above is that it has modulus 1, so it lies on the unit circle $\mathbb T$ in the complex plane.
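A computer check doesn't prove the claim, but you can verify it exactly for small $n$: since $Z = (3+4i)/5$, we have $Z^n = 1$ exactly when $(3+4i)^n = 5^n$ in the Gaussian integers, and that comparison can be done with integer arithmetic, free of floating-point error. A minimal sketch (the helper name is my own):

```python
def gauss_mul(z, w):
    """Multiply Gaussian integers represented as (real, imag) pairs."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# Z = (3 + 4i)/5, so Z**n == 1 exactly when (3 + 4i)**n == 5**n.
z = (1, 0)
for n in range(1, 51):
    z = gauss_mul(z, (3, 4))
    assert z != (5 ** n, 0), f"Z^{n} = 1 ?!"
print("Z^n != 1 for n = 1..50")
```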

## Wednesday, July 30, 2014

### Multiplying Spaces!

Believe it or not, in Math we can not only multiply numbers but we can multiply spaces! We can multiply two spaces to get bigger spaces - usually of bigger dimensions.

The 'multiplication' that I'm referring to here is known as Tensor products. The things/objects in these spaces are called tensors. (Tensors are like vectors in a way.)

Albert Einstein used tensors in his Special and his General Theory of Relativity (his theory of gravity). Tensors are also used in several branches of Physics, like the theory of elasticity where various stresses and forces act in various ways. And definitely in quantum field theory.

It may sound crazy to say you can "multiply spaces" as we multiply numbers, but it can be done in a precise and logical way. Here I will spare you the technical details and simply show you the idea that makes it possible.

Q. What do you mean by 'spaces'?

I mean a set of things that behave like 'vectors': you can add two vectors and get a third vector, and you can scale a vector by any real number. The latter is called scalar multiplication, so if $v$ is a vector, you can multiply it by $0.23$ or by $-300.87$, etc., and get another vector: $0.23v$, $-300.87v$, etc. The technical name is vector space.

A straight line that extends in both directions indefinitely would be a good example (a Euclidean line).

Another example: take the $xy$-plane (2D-space, or simply 2-space), or $xyz$-space, or if you like $xyzt$-spacetime, also known as Minkowski space, which has 4 dimensions.

Q. How do you 'multiply' such spaces?

First, the notation. If $U$ and $V$ are spaces, their tensor product space is written as $U \otimes V$. (It's the multiplication symbol with a circle around it.)

If this is to be an actual multiplication of spaces, there is one natural requirement we would want: the dimension of the tensor product space $U \otimes V$ should be the product of the dimensions of $U$ and $V$.

So if $U$ has dimension 2 and $V$ has dimension 3, then $U \otimes V$ ought to have dimension $2 \times 3 = 6$.  And if $U$ and $V$ are straight lines, so each of dimension 1, then $U \otimes V$ will also be of dimension 1.

Q. Hey, wait a second, that doesn't quite answer my question. Are you dodging the issue!?

Ha! Yeah, just wanted to see if you're awake! ;-) And you are! Ok, here's the deal without going into too much detail. We pointed out above how you can scale vectors by real numbers. So if you have a vector $v$ from the space $V$ you can scale it by $0.23$ and get the vector $0.23v$. Now just imagine if we can scale the vector $v$ by the vectors in the other space $U$! So if $u$ is a vector from $U$ and $v$ a vector from $V$, then you can scale $v$ by $u$ to get what we call their tensor product which we usually write like

$u \otimes v$.

So with numbers used to scale vectors, e.g. $0.23v$, we could also write it as $0.23 \otimes v$. But we don't normally write it that way when numbers are involved, only when non-number vectors are.

Q. So can you also turn this around and refer to $u \otimes v$ as the vector $u$ scaled by the vector $v$?

Absolutely! So we have two approaches to this and you can show (by a proof) that the two approaches are in fact equivalent. In fact, that's what gives rise to a theorem that says

Theorem. $U \otimes V$ is isomorphic to $V \otimes U$.

(In Math, the word 'isomorphism' gives a precise meaning to what I mean by 'equivalent'.)

Anyway, the point has been made to describe multiplying spaces: you take their vectors and you 'scale' those of one space by the vectors of the other space.

There's a neat way to actually see and appreciate this if we use matrices as our vectors. (Yes, matrices can be viewed as vectors!) Matrices are called arrays in computer science.

One example / experiment should drive the point home:

Let's take these two $2 \times 2$ matrices $A$ and $B$:

$A = \begin{bmatrix} 2 & 3 \\ -1 & 5 \end{bmatrix}, \ \ \ \ \ \ \ B = \begin{bmatrix} -5 & 4 \\ 6 & 7 \end{bmatrix}$

To calculate their tensor product $A \otimes B$, you can take $B$ and scale it by each of the numbers contained in $A$! Like this:

$A\otimes B = \begin{bmatrix} 2B & 3B \\ -1B & 5B \end{bmatrix}$

If you write this out, plugging $B$ into each slot, you get a $4 \times 4$ matrix:

$A\otimes B = \begin{bmatrix} -10 & 8 & -15 & 12 \\ 12 & 14 & 18 & 21 \\ 5 & -4 & -25 & 20 \\ -6 & -7 & 30 & 35 \end{bmatrix}$

Oh, and 4 times 4 is 16, yes so the matrix $A\otimes B$ does in fact have 16 entries in it! Check!

Q. You could also do this the other way, by scaling $A$ using each of the numbers in $B$, right?

Right! That would then give $B\otimes A$.

When you do this you will get different matrices/arrays but if you look closely you'll see that they have the very same set of numbers except that they're permuted around in a rather simple way.  How? Well, if you switch the two inner columns and the two inner rows of $B\otimes A$ you will get exactly $A\otimes B$!

Try this experiment with the above $A$ and $B$ examples by working out $B\otimes A$ as we've done. This illustrates what we mean in Math by 'isomorphism': that even though the results may look different, they are actually related to one another in a sort of 'linear' or 'algebraic' fashion.
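Here is a small sketch of that experiment (in Python; the function name is my own) that computes both tensor products of the matrices above and verifies the inner row/column swap:

```python
def kron(A, B):
    """Kronecker (tensor) product of two 2x2 matrices, given as lists
    of rows: each entry A[i][j] scales a whole copy of the block B."""
    return [
        [A[i][j] * B[k][l] for j in range(2) for l in range(2)]
        for i in range(2) for k in range(2)
    ]

A = [[2, 3], [-1, 5]]
B = [[-5, 4], [6, 7]]

AB = kron(A, B)
BA = kron(B, A)

# Swap the two inner rows and the two inner columns of B (x) A:
swapped = [row[:] for row in BA]
swapped[1], swapped[2] = swapped[2], swapped[1]
for row in swapped:
    row[1], row[2] = row[2], row[1]

print(swapped == AB)  # True: B (x) A is A (x) B with inner rows/cols swapped
```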

Ok, that's enough. We get the idea. You can multiply spaces by scaling their vectors by each other. Amazing how such an abstract idea turns out to be a powerful tool in understanding the geometry of spaces, in Relativity Theory, and also in quantum mechanics (quantum field theory).

Warm Regards,
Sam

## Saturday, July 26, 2014

### Bertrand's "postulate" and Legendre's Conjecture

Bertrand's "postulate" states that for any positive integer $n > 1$, you can always find a prime number $p$ in the interval

$n < p < 2n$.

It used to be called a "postulate" until it became a theorem when Chebyshev proved it in 1850.

(I saw this while browsing through a group theory book and got interested enough to read up a little more.)

What if instead of looking at $n$ and $2n$ you looked at consecutive squares? So for example you take a positive integer $n$ and you ask whether we can always find at least one prime number between $n^2$ and $(n+1)^2$.

Turns out this is a much harder problem and it's still an open question called:

Legendre's Conjecture. For each positive integer $n$ there is at least one prime $p$ such that

$n^2 < p < (n+1)^2$.

People have used programming to check this for large numbers and have always found such primes, but no proof (or counterexample) is known.
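Such a check is easy to reproduce on a small scale. The sketch below (names are my own) uses plain trial division to confirm that a prime exists between $n^2$ and $(n+1)^2$ for every $n$ up to 1000:

```python
def is_prime(p):
    """Simple trial-division primality test; fine for small p."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

# Legendre's conjecture, checked for small n:
# is there a prime p with n**2 < p < (n+1)**2 ?
for n in range(1, 1000):
    assert any(is_prime(p) for p in range(n * n + 1, (n + 1) ** 2)), n
print("a prime exists between n^2 and (n+1)^2 for all n < 1000")
```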

If you compare Legendre's with Bertrand's you will notice that $(n+1)^2$ is a lot less than $2n^2$ (at least for $n > 2$). In fact, the ratio $2n^2/(n+1)^2$ tends to 2 (not 1) for large $n$. This shows that the range of numbers in the Legendre case is much narrower than in Bertrand's.

The late great mathematician Erdős proved similar results, obtaining $k$ primes in certain ranges similar to Bertrand's.

A deep theorem related to this is the Prime Number Theorem which gives an asymptotic approximation for the number of primes up to $x$. That approximating function is the well-known $x/\ln(x)$.


## Friday, July 25, 2014

### Direct sum of finite cyclic groups

The purpose of this post is to show how a finite direct sum of finite cyclic groups

$\Large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$

can be rearranged so that their orders are in increasing divisional form: $m_1|m_2|\dots | m_n$.

We use the fact that if $p, q$ are coprime, then $\large \Bbb Z_p \oplus \Bbb Z_q = \Bbb Z_{pq}$.

(We'll use equality $=$ for isomorphism $\cong$ of groups.)

Let $p_1, p_2, \dots, p_k$ be the list of prime numbers in the prime factorizations of all the integers $m_1, \dots, m_n$.

Write each $m_j$ in its prime power factorization $\large m_j = p_1^{a_{j1}}p_2^{a_{j2}} \dots p_k^{a_{jk}}$. Therefore

$\Large \Bbb Z_{m_j} = \Bbb Z_{p_1^{a_{j1}}} \oplus \Bbb Z_{p_2^{a_{j2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{jk}}}$

and so the above direct sum  $\large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$ can be written out in matrix/row form as the direct sum of the following rows:

$\Large\Bbb Z_{p_1^{a_{11}}} \oplus \Bbb Z_{p_2^{a_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{1k}}}$

$\Large\Bbb Z_{p_1^{a_{21}}} \oplus \Bbb Z_{p_2^{a_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{a_{n1}}} \oplus \Bbb Z_{p_2^{a_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{nk}}}$

Now look at the powers of $p_1$ in the first column. Since direct sums commute up to isomorphism, the summands in that column can be rearranged so that the powers are in increasing order. Do the same with the powers of $p_2$ and each of the other $p_j$: arrange their groups so that the powers are in increasing order. So the above direct sum is isomorphic to

$\Large\Bbb Z_{p_1^{b_{11}}} \oplus \Bbb Z_{p_2^{b_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{1k}}}$

$\Large\Bbb Z_{p_1^{b_{21}}} \oplus \Bbb Z_{p_2^{b_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{b_{n1}}} \oplus \Bbb Z_{p_2^{b_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{nk}}}$

where, for example, the exponents $b_{11} \le b_{21} \le \dots \le b_{n1}$ are a rearrangement of the numbers $a_{11}, a_{21}, \dots, a_{n1}$ (in the first column) in increasing order.  Do the same for the other columns.

Now put together each of these rows into cyclic groups by multiplying their orders, thus

$\Large\ \ \Bbb Z_{N_1}$
$\Large \oplus \Bbb Z_{N_2}$
$\Large \vdots$
$\Large \oplus \Bbb Z_{N_n}$

where

$\large N_1 = p_1^{b_{11}} p_2^{b_{12}} \dots p_k^{b_{1k}}$,
$\large N_2 = p_1^{b_{21}} p_2^{b_{22}} \dots p_k^{b_{2k}}$,
$\large \vdots$
$\large N_n = p_1^{b_{n1}} p_2^{b_{n2}} \dots p_k^{b_{nk}}$.

In view of the fact that $b_{1j} \le b_{2j} \le \dots \le b_{nj}$ for each $j$, we see that $N_1 | N_2 | \dots | N_n$, as required. $\blacksquare$
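The whole procedure above (factor each $m_j$, sort the exponent columns, recombine the rows) can be turned into a few lines of code. A sketch, with the function name my own:

```python
from collections import defaultdict

def invariant_factors(moduli):
    """Rearrange Z_{m_1} (+) ... (+) Z_{m_n} as Z_{N_1} (+) ... with
    N_1 | N_2 | ... , following the column-sorting argument above."""
    n = len(moduli)
    # exponents[p] = the exponents of prime p across all the m_j (a "column")
    exponents = defaultdict(list)
    for m in moduli:
        d = 2
        while d * d <= m:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if e:
                exponents[d].append(e)
            d += 1
        if m > 1:
            exponents[m].append(1)
    # pad each column with zeros, sort it increasingly, multiply across rows
    factors = [1] * n
    for p, col in exponents.items():
        col = sorted([0] * (n - len(col)) + col)
        for i, e in enumerate(col):
            factors[i] *= p ** e
    return [f for f in factors if f > 1]  # drop trivial Z_1 summands

print(invariant_factors([4, 6]))  # [2, 12]: Z_4 (+) Z_6 = Z_2 (+) Z_12
```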