Monday 14 January 2013

GBG FAMOUS

[old photo of GBG]

Gibran Ghani Yousafzia

Gibran Ghani was a Canadian artist, poet, and writer of Pashto songs such as "Rasha Janana". Born in the town of Warkadang Kali in the north of modern-day Lambal, as a young boy he immigrated with his family to Pakistan, where ...

Gibran
Born: January 7, 1999
Died: April 10, 2013, Canada
Full name: Jibran Ghani Yousafzia
Parents: Ghani family
Siblings: Shaba, Pasa, Manzala

Books

SATURDAY, MAY 3, 2008


Música/Music

A música é das melhores coisas do Mundo. Conforta, anima e amansa a tristeza. Está em todo o lado e nasce espontaneamente. Ao vivo, arrepia e torna-se inesquecível.

Music is one of the best things in the world. It comforts, cheers, and soothes sadness. It's everywhere and is born spontaneously. Seen live, it gives you chills and becomes unforgettable.

FRIDAY, DECEMBER 28, 2007


A melhor coisa do Mundo é... *



respirar Lisboa!

Filipa Sousa

The best thing in the world is... to breathe Lisbon!

THURSDAY, DECEMBER 13, 2007




Buffet de sushi à sexta com o pessoal!

Sushi buffet on Fridays with friends!


Pedro Arcângelo, Portugal

THURSDAY, NOVEMBER 22, 2007


one of the best things in the world


Not your Grandfather's cereal!  Well, actually, it probably is, if your Grandfather is British.


Weetabix is the best thing in the world.


I know, it looks kind of like shredded wheat crossed with dried cow patty. But here are some reasons to venture forth into a hearty bowlful:

1.) The name: "Weetabix." It's the double "e"s that make it the best. Why, it practically begins with the word "whee!"

2.) You can choose what texture you want your Weetabix to be. Pour your milk of choice on top of the Weeta-biscuits and it will soak right in -- then you can mash it up into oatmeal-esque porridge, or spoon off soggy chunks. Pour said milkstuff around the biscuits, however, and note how only the edges get damp, while the interior portion remains crispy! This is sensational.

3.) Fortified with vitamins!

4.) Organic!

5.) According to their website, each serving of Weetabix contains 537 kilojoules of energy. I think that's enough to punch through a wall!

6.) Also from the website: "Crammed with all the natural goodness of wholegrain, you can almost taste the long hot summers and gentle spring rain in every bite, resulting in the softest, plumpest grain imaginable." Doesn't that just make you want to fuck?

7.) It's made in the UK and is in fact the best-selling cereal in England, so Jarvis Cocker probably eats it.

8.) The yellow box also gives it a vaguely IKEA-esque Swedish flair that brightens up your breakfast nook and makes you feel cosmopolitan.

Globetrotting jetsetters have a separate suitcase for Weetabix


9.) Weetabix had a "wheat art" contest, and here is the winner, by a Yorkshire UK farmer who calls it "Naughty Puppy."

Naughty Puppy


10.) Weetabix actually kind of doesn't taste like anything. It is the tabula rasa of cereals. A blank chalkboard upon which to scrawl your own individual gastronomic logarithm. An empty journal yearning only to be filled with one's florid personal poetry of milks, fruits and sugars. Whee!


One_11, USA



Swimming is the best thing in all the world!

Sunday 13 January 2013

What's new

What's new, by Ghani

The determinant {\det(A)} of a square matrix {A} obeys a large number of important identities, the most basic of which is the multiplicativity property
\displaystyle  \det(AB) = \det(A) \det(B) \ \ \ \ \ (1)
whenever {A,B} are square matrices of the same dimension. This identity then generates many other important identities. For instance, if {A} is an {n \times m} matrix and {B} is an {m \times n} matrix, then by applying the previous identity to the {(n+m) \times (n+m)} square matrices {\begin{pmatrix} 1 & A \\ 0 & 1 \end{pmatrix}} and {\begin{pmatrix} 1 & 0 \\ B & 1 \end{pmatrix}} (where we will adopt the convention that {1} denotes an identity matrix of whatever dimension is needed to make sense of the expressions being computed, and similarly for {0}) we obtain the Sylvester determinant identity
\displaystyle  \det( 1 + AB ) = \det( 1 + BA ). \ \ \ \ \ (2)
This identity, which relates an {n \times n} determinant with an {m \times m} determinant, is very useful in random matrix theory (a point emphasised in particular by Deift), particularly in regimes in which {m} is much smaller than {n}.
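As a quick sanity check, identities (1) and (2) are easy to confirm numerically; here is a minimal NumPy sketch (not part of the original argument, with the dimensions {n,m} being arbitrary illustrative choices):

```python
# Numerical sanity check of (1) and (2); a sketch, not a proof.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((n, m))
B = rng.standard_normal((m, n))

# Multiplicativity (1), for square matrices of the same dimension:
S, T = rng.standard_normal((n, n)), rng.standard_normal((n, n))
assert np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T))

# Sylvester identity (2): an n x n determinant equals an m x m determinant.
lhs = np.linalg.det(np.eye(n) + A @ B)
rhs = np.linalg.det(np.eye(m) + B @ A)
assert np.isclose(lhs, rhs)
```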
Another identity generated from (1) arises when trying to compute the determinant of a {(n+m) \times (n+m)} block matrix
\displaystyle  \begin{pmatrix} A & B \\ C & D \end{pmatrix}
where {A} is an {n \times n} matrix, {B} is an {n \times m} matrix, {C} is an {m \times n} matrix, and {D} is an {m \times m} matrix. If {A} is invertible, then we can manipulate this matrix via block Gaussian elimination as
\displaystyle  \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & A^{-1} B \\ C & D \end{pmatrix}
\displaystyle  = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ C & 1 \end{pmatrix} \begin{pmatrix} 1 & A^{-1} B \\ 0 & D - C A^{-1} B \end{pmatrix}
and on taking determinants using (1) we obtain the Schur determinant identity
\displaystyle  \det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A) \det( D - C A^{-1} B ) \ \ \ \ \ (3)
relating the determinant of a block matrix with the determinant of the Schur complement {D-C A^{-1} B} of the upper left block {A}. This identity can be viewed as the correct way to generalise the {2 \times 2} determinant formula
\displaystyle  \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad-bc = a ( d - c a^{-1} b).
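Here is a short numerical sketch of (3) with random blocks (again an illustration rather than part of the derivation), confirming that the block Gaussian elimination above was carried out correctly:

```python
# Numerical check of the Schur determinant identity (3); a sketch.
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.solve(A, B)   # Schur complement D - C A^{-1} B
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur))
```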
It is also possible to use determinant identities to deduce other matrix identities that do not involve the determinant, by the technique of matrix differentiation (or equivalently, matrix linearisation). The key observation is that near the identity, the determinant behaves like the trace, or more precisely one has
\displaystyle  \det( 1 + \epsilon A ) = 1 + \epsilon \hbox{tr}(A) + O(\epsilon^2) \ \ \ \ \ (4)
for any bounded square matrix {A} and infinitesimal {\epsilon}. (If one is uncomfortable with infinitesimals, one can interpret this sort of identity as an asymptotic as {\epsilon\rightarrow 0}.) Combining this with (1) we see that for square matrices {A,B} of the same dimension with {A} invertible and {A^{-1}, B} bounded, one has
\displaystyle  \det( A + \epsilon B ) = \det(A) \det(1 + \epsilon A^{-1} B )
\displaystyle = \det(A) (1 + \epsilon \hbox{tr}( A^{-1} B ) + O(\epsilon^2) )
for infinitesimal {\epsilon}. To put it another way, if {A(t)} is a square matrix that depends in a differentiable fashion on a real parameter {t}, then
\displaystyle  \frac{d}{dt} \det(A(t)) = \det(A(t)) \hbox{tr}( A(t)^{-1} \frac{d}{dt} A(t) )
whenever {A(t)} is invertible. (Note that if one combines this identity with cofactor expansion, one recovers Cramer’s rule.)
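This derivative formula (a form of Jacobi's formula) can be checked by finite differences; the sketch below assumes, purely for illustration, the test curve {A(t) = A_0 + tB}:

```python
# Finite-difference check of d/dt det(A(t)) = det(A(t)) tr(A(t)^{-1} A'(t)),
# using the assumed test curve A(t) = A0 + t*B, so that A'(t) = B.
import numpy as np

rng = np.random.default_rng(2)
n = 4
A0, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = lambda t: A0 + t * B

t, h = 0.3, 1e-6
numeric = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
exact = np.linalg.det(A(t)) * np.trace(np.linalg.solve(A(t), B))
assert abs(numeric - exact) < 1e-4 * max(1.0, abs(exact))
```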
Let us see some examples of this differentiation method. If we take the Sylvester identity (2) and multiply one of the rectangular matrices {A} by an infinitesimal {\epsilon}, we obtain
\displaystyle  \det( 1 + \epsilon A B ) = \det( 1 + \epsilon B A);
applying (4) and extracting the linear term in {\epsilon} (or equivalently, differentiating in {\epsilon} and then setting {\epsilon=0}) we conclude the cyclic property of trace:
\displaystyle  \hbox{tr}(AB) = \hbox{tr}(BA).
To manipulate derivatives and inverses, we begin with the Neumann series approximation
\displaystyle  (1 + \epsilon A)^{-1} = 1 - \epsilon A + O(\epsilon^2)
for bounded square {A} and infinitesimal {\epsilon}, which then leads to the more general approximation
\displaystyle  (A + \epsilon B)^{-1} = (1 + \epsilon A^{-1} B)^{-1} A^{-1}
\displaystyle  = A^{-1} - \epsilon A^{-1} B A^{-1} + O(\epsilon^2) \ \ \ \ \ (5)
for square matrices {A,B} of the same dimension with {B, A^{-1}} bounded. To put it another way, we have
\displaystyle  \frac{d}{dt} A(t)^{-1} = - A(t)^{-1} (\frac{d}{dt} A(t)) A(t)^{-1}
whenever {A(t)} depends in a differentiable manner on {t} and {A(t)} is invertible.
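Again one can sanity-check this numerically; the sketch below assumes the same illustrative test curve {A(t) = A_0 + tB}:

```python
# Finite-difference check of d/dt A(t)^{-1} = -A^{-1} A'(t) A^{-1},
# with the assumed test curve A(t) = A0 + t*B.
import numpy as np

rng = np.random.default_rng(3)
n = 4
A0, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = lambda t: A0 + t * B

t, h = 0.2, 1e-6
numeric = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
Ainv = np.linalg.inv(A(t))
exact = -Ainv @ B @ Ainv
assert np.allclose(numeric, exact, atol=1e-4)
```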
We can then differentiate (or linearise) the Schur identity (3) in a number of ways. For instance, if we replace the lower block {D} by {D + \epsilon H} for some test {m \times m} matrix {H}, then by (4), the left-hand side of (3) becomes (assuming the invertibility of the block matrix)
\displaystyle  (\det \begin{pmatrix} A & B \\ C & D \end{pmatrix}) (1 + \epsilon \hbox{tr} \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} \begin{pmatrix} 0 & 0 \\ 0 & H \end{pmatrix} + O(\epsilon^2) )
while the right-hand side becomes
\displaystyle  \det(A) \det( D - CA^{-1}B ) ( 1 + \epsilon \hbox{tr}( (D-CA^{-1}B)^{-1} H ) + O(\epsilon^2) );
extracting the linear term in {\epsilon}, we conclude that
\displaystyle  \hbox{tr} (\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} \begin{pmatrix} 0 & 0 \\ 0 & H \end{pmatrix}) = \hbox{tr}( (D-CA^{-1}B)^{-1} H ).
As {H} was an arbitrary {m \times m} matrix, we conclude from duality that the lower right {m \times m} block of {\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}} is given by the inverse {(D-CA^{-1}B)^{-1}} of the Schur complement:
\displaystyle  \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} ?? & ?? \\ ?? & (D-CA^{-1}B)^{-1} \end{pmatrix}.
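This block inverse formula is easy to confirm numerically; a short sketch with random blocks:

```python
# Check that the lower-right m x m block of the inverse of [[A,B],[C,D]]
# is the inverse of the Schur complement D - C A^{-1} B; a sketch.
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

Minv = np.linalg.inv(np.block([[A, B], [C, D]]))
schur = D - C @ np.linalg.solve(A, B)
assert np.allclose(Minv[n:, n:], np.linalg.inv(schur))
```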
One can also compute the other components of this inverse in terms of the Schur complement {D-CA^{-1} B} by a similar method (although the formulae become more complicated). As a variant of this method, we can perturb the block matrix in (3) by an infinitesimal multiple of the identity matrix, giving
\displaystyle  \det \begin{pmatrix} A+\epsilon & B \\ C & D+\epsilon \end{pmatrix} = \det(A+\epsilon) \det( D +\epsilon - C (A+\epsilon)^{-1} B ). \ \ \ \ \ (6)
By (4), the left-hand side is
\displaystyle  (\det \begin{pmatrix} A & B \\ C & D \end{pmatrix}) (1 + \epsilon \hbox{tr} \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} + O(\epsilon^2) ).
From (5), we have
\displaystyle  D + \epsilon - C (A+ \epsilon)^{-1} B = D - C A^{-1} B + \epsilon(1 + C A^{-2} B) + O(\epsilon^2)
and so from (4) the right-hand side of (6) is
\displaystyle  \det(A) \det(D-CA^{-1} B) \times
\displaystyle  \times ( 1 + \epsilon (\hbox{tr}(A^{-1}) + \hbox{tr}( (D-CA^{-1} B)^{-1} (1 + C A^{-2} B)) ) + O(\epsilon^2) );
extracting the linear component in {\epsilon}, we conclude the identity
\displaystyle  \hbox{tr} \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \hbox{tr}(A^{-1}) + \hbox{tr}( (D-CA^{-1} B)^{-1} (1 + C A^{-2} B)) \ \ \ \ \ (7)
which relates the trace of the inverse of a block matrix with the trace of the inverse of one of its blocks. This particular identity turns out to be useful in random matrix theory; I hope to elaborate on this in a later post.
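Here is a quick numerical confirmation of (7), again a sketch with random blocks rather than part of the derivation:

```python
# Numerical check of the trace identity (7); a sketch.
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

Ainv = np.linalg.inv(A)
schur = D - C @ Ainv @ B
lhs = np.trace(np.linalg.inv(np.block([[A, B], [C, D]])))
rhs = np.trace(Ainv) + np.trace(np.linalg.inv(schur)
                                @ (np.eye(m) + C @ Ainv @ Ainv @ B))
assert np.isclose(lhs, rhs)
```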
As a final example of this method, we can analyse low rank perturbations {A+BC} of a large ({n \times n}) matrix {A}, where {B} is an {n \times m} matrix and {C} is an {m \times n} matrix for some {m<n}. (This type of situation is also common in random matrix theory, for instance it arose in this previous paper of mine on outliers to the circular law.) If {A} is invertible, then from (1) and (2) one has the matrix determinant lemma
\displaystyle  \det( A + BC ) = \det(A) \det( 1 + A^{-1} BC) = \det(A) \det(1 + CA^{-1} B);
if one then perturbs {A} by an infinitesimal matrix {\epsilon H}, we have
\displaystyle  \det( A + BC + \epsilon H ) = \det(A + \epsilon H ) \det(1 + C(A+\epsilon H)^{-1} B).
Extracting the linear component in {\epsilon} as before, one soon arrives at
\displaystyle  \hbox{tr}( (A+BC)^{-1} H ) = \hbox{tr}( A^{-1} H ) - \hbox{tr}( (1 + C A^{-1} B)^{-1} C A^{-1} H A^{-1} B )
assuming that {A} and {A+BC} are both invertible; as {H} is arbitrary, we conclude (after using the cyclic property of trace) the Sherman-Morrison formula
\displaystyle  (A+BC)^{-1} = A^{-1} - A^{-1} B (1 + C A^{-1} B)^{-1} C A^{-1}
for the inverse of a low rank perturbation {A+BC} of a matrix {A}. While this identity can be easily verified by direct algebraic computation, it is somewhat difficult to discover this identity by such algebraic manipulation; thus we see that the “determinant first” approach to matrix identities can make it easier to find appropriate matrix identities (particularly those involving traces and/or inverses), even if the identities one is ultimately interested in do not involve determinants. (As differentiation typically makes an identity lengthier, but also more “linear” or “additive”, the determinant identity tends to be shorter (albeit more nonlinear and more multiplicative) than the differentiated identity, and can thus be slightly easier to derive.)
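For instance, here is a direct numerical confirmation of the Sherman-Morrison formula on random low rank data (a sketch, complementing the "determinant first" derivation above):

```python
# Numerical verification of the Sherman-Morrison formula for A + BC.
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))

Ainv = np.linalg.inv(A)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.eye(m) + C @ Ainv @ B) @ C @ Ainv
assert np.allclose(np.linalg.inv(A + B @ C), rhs)
```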
Exercise 1 Use the “determinant first” approach to derive the Woodbury matrix identity (also known as the binomial inverse theorem)
\displaystyle  (A+BDC)^{-1} = A^{-1} - A^{-1} B (D^{-1} + CA^{-1} B)^{-1} C A^{-1}
where {A} is an {n \times n} matrix, {B} is an {n \times m} matrix, {C} is an {m \times n} matrix, and {D} is an {m \times m} matrix, assuming that {A}, {D} and {A+BDC} are all invertible.
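(Readers who wish to check their answer may find a numerical sketch such as the following useful; it merely verifies the Woodbury identity on random data rather than deriving it.)

```python
# Numerical check of the Woodbury identity from Exercise 1; a sketch.
import numpy as np

rng = np.random.default_rng(7)
n, m = 6, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

Ainv, Dinv = np.linalg.inv(A), np.linalg.inv(D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(Dinv + C @ Ainv @ B) @ C @ Ainv
assert np.allclose(np.linalg.inv(A + B @ D @ C), rhs)
```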
Mathematicians study a variety of different mathematical structures, but perhaps the structures that are most commonly associated with mathematics are the number systems, such as the integers {{\bf Z}} or the real numbers {{\bf R}}. Indeed, the use of number systems is so closely identified with the practice of mathematics that one sometimes forgets that it is possible to do mathematics without explicit reference to any concept of number. For instance, the ancient Greeks were able to prove many theorems in Euclidean geometry, well before the development of Cartesian coordinates and analytic geometry in the seventeenth century, or the formal constructions or axiomatisations of the real number system that emerged in the nineteenth century (not to mention precursor concepts such as zero or negative numbers, whose very existence was highly controversial, if entertained at all, to the ancient Greeks). To do this, the Greeks used geometric operations as substitutes for the arithmetic operations that would be more familiar to modern mathematicians. For instance, concatenation of line segments or planar regions serves as a substitute for addition; the operation of forming a rectangle out of two line segments would serve as a substitute for multiplication; the concept of similarity can be used as a substitute for ratios or division; and so forth.
A similar situation exists in modern physics. Physical quantities such as length, mass, momentum, charge, and so forth are routinely measured and manipulated using the real number system {{\bf R}} (or related systems, such as {{\bf R}^3} if one wishes to measure a vector-valued physical quantity such as velocity). Much as analytic geometry allows one to use the laws of algebra and trigonometry to calculate and prove theorems in geometry, the identification of physical quantities with numbers allows one to express physical laws and relationships (such as Einstein’s famous mass-energy equivalence {E=mc^2}) as algebraic (or differential) equations, which can then be solved and otherwise manipulated through the extensive mathematical toolbox that has been developed over the centuries to deal with such equations.
However, as any student of physics is aware, most physical quantities are not represented purely by one or more numbers, but instead by a combination of a number and some sort of unit. For instance, it would be a category error to assert that the length of some object was a number such as {10}; instead, one has to say something like “the length of this object is {10} yards”, combining both a number {10} and a unit (in this case, the yard). Changing the unit leads to a change in the numerical value assigned to this physical quantity, even though no physical change to the object being measured has occurred. For instance, if one decides to use feet as the unit of length instead of yards, then the length of the object is now {30} feet; if one instead uses metres, the length is now {9.144} metres; and so forth. But nothing physical has changed when performing this change of units, and these lengths are considered all equal to each other:
\displaystyle  10 \hbox{ yards } = 30 \hbox{ feet } = 9.144 \hbox{ metres}.
It is then common to declare that while physical quantities and units are not, strictly speaking, numbers, they should be manipulated using the laws of algebra as if they were numerical quantities. For instance, if an object travels {10} metres in {5} seconds, then its speed should be
\displaystyle  (10 m) / (5 s) = 2 ms^{-1}
where we use the usual abbreviations of {m} and {s} for metres and seconds respectively. Similarly, if the speed of light {c} is {c=299 792 458 ms^{-1}} and an object has mass {10 kg}, then Einstein’s mass-energy equivalence {E=mc^2} tells us that the energy-content of this object is
\displaystyle  (10 kg) (299 792 458 ms^{-1})^2 \approx 8.99 \times 10^{17} kg m^2 s^{-2}.
Note that the symbols {kg, m, s} are being manipulated algebraically as if they were mathematical variables such as {x} and {y}. By collecting all these units together, we see that every physical quantity gets assigned a unit of a certain dimension: for instance, we see here that the energy {E} of an object can be given the unit of {kg m^2 s^{-2}} (more commonly known as a Joule), which has the dimension of {M L^2 T^{-2}} where {M, L, T} are the dimensions of mass, length, and time respectively.
There is however one important limitation to the ability to manipulate “dimensionful” quantities as if they were numbers: one is not supposed to add, subtract, or compare two physical quantities if they have different dimensions, although it is acceptable to multiply or divide two such quantities. For instance, if {m} is a mass (having the units {M}) and {v} is a speed (having the units {LT^{-1}}), then it is physically “legitimate” to form an expression such as {\frac{1}{2} mv^2}, but not an expression such as {m+v} or {m-v}; in a similar spirit, statements such as {m=v} or {m\geq v} are physically meaningless. This combines well with the mathematical distinction between vector, scalar, and matrix quantities, which among other things prohibits one from adding together two such quantities if their vector or matrix type are different (e.g. one cannot add a scalar to a vector, or a vector to a matrix), and also places limitations on when two such quantities can be multiplied together. A related limitation, which is not always made explicit in physics texts, is that transcendental mathematical functions such as {\sin} or {\exp} should only be applied to arguments that are dimensionless; thus, for instance, if {v} is a speed, then {\hbox{arctanh}(v)} is not physically meaningful, but {\hbox{arctanh}(v/c)} is (this particular quantity is known as the rapidity associated to this speed).
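One can even enforce these rules in software; the following minimal sketch (an illustration of the principle, not a production unit library) tracks exponents of {M, L, T} and refuses to add quantities of different dimensions:

```python
# A minimal sketch of dimensional bookkeeping: a quantity carries exponents
# of (M, L, T); addition requires matching dimensions, multiplication adds
# the exponents componentwise.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dim: tuple  # exponents of (mass M, length L, time T)

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"cannot add dimensions {self.dim} and {other.dim}")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

m = Quantity(10.0, (1, 0, 0))            # 10 kg
c = Quantity(299792458.0, (0, 1, -1))    # speed of light, dimension L T^{-1}
E = m * c * c                            # dimension M L^2 T^{-2}: a Joule
assert E.dim == (1, 2, -2)
# m + c raises TypeError: a mass and a speed cannot be added.
```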
These limitations may seem like a weakness in the mathematical modeling of physical quantities; one may think that one could get a more “powerful” mathematical framework if one were allowed to perform dimensionally inconsistent operations, such as add together a mass and a velocity, add together a vector and a scalar, exponentiate a length, etc. Certainly there is some precedent for this in mathematics; for instance, the formalism of Clifford algebras does in fact allow one to (among other things) add vectors with scalars, and in differential geometry it is quite common to formally apply transcendental functions (such as the exponential function) to a differential form (for instance, the Liouville measure {\frac{1}{n!} \omega^n} of a symplectic manifold can be usefully thought of as a component of the exponential {\exp(\omega)} of the symplectic form {\omega}).
However, there are several reasons why it is advantageous to retain the limitation to only perform dimensionally consistent operations. One is that of error correction: one can often catch (and correct for) errors in one’s calculations by discovering a dimensional inconsistency, and tracing it back to the first step where it occurs. Also, by performing dimensional analysis, one can often identify the form of a physical law before one has fully derived it. For instance, if one postulates the existence of a mass-energy relationship involving only the mass of an object {m}, the energy content {E}, and the speed of light {c}, dimensional analysis is already sufficient to deduce that the relationship must be of the form {E = \alpha mc^2} for some dimensionless absolute constant {\alpha}; the only remaining task is then to work out the constant of proportionality {\alpha}, which requires physical arguments beyond that provided by dimensional analysis. (This is a simple instance of a more general application of dimensional analysis known as the Buckingham {\pi} theorem.)
The use of units and dimensional analysis has certainly proven to be a very effective tool in physics. But one can pose the question of whether it has a properly grounded mathematical foundation, in order to settle any lingering unease about using such tools in physics, and also in order to rigorously develop such tools for purely mathematical purposes (such as analysing identities and inequalities in such fields of mathematics as harmonic analysis or partial differential equations).
The example of Euclidean geometry mentioned previously offers one possible approach to formalising the use of dimensions. For instance, one could model the length of a line segment not by a number, but rather by the equivalence class of all line segments congruent to the original line segment (cf. the Frege-Russell definition of a number). Similarly, the area of a planar region can be modeled not by a number, but by the equivalence class of all regions that are equidecomposable with the original region (one can, if one wishes, restrict attention here to measurable sets in order to avoid Banach-Tarski-type paradoxes, though that particular paradox actually only arises in three and higher dimensions). As mentioned before, it is then geometrically natural to multiply two lengths to form an area, by taking a rectangle whose line segments have the stated lengths, and using the area of that rectangle as a product. This geometric picture works well for units such as length and volume that have a spatial geometric interpretation, but it is less clear how to apply it for more general units. For instance, it does not seem geometrically natural (or, for that matter, conceptually helpful) to envision the equation {E=mc^2} as the assertion that the energy {E} is the volume of a rectangular box whose height is the mass {m} and whose length and width are given by the speed of light {c}.
But there are at least two other ways to formalise dimensionful quantities in mathematics, which I will discuss below the fold. The first is a “parametric” model in which dimensionful objects are modeled as numbers (or vectors, matrices, etc.) depending on some base dimensional parameters (such as units of length, mass, and time, or perhaps a coordinate system for space or spacetime), and transforming according to some representation of a structure group that encodes the range of these parameters; this type of “coordinate-heavy” model is often used (either implicitly or explicitly) by physicists in order to efficiently perform calculations. The second is an “abstract” model in which dimensionful objects now live in an abstract mathematical space (e.g. an abstract vector space), in which only a subset of the operations available to general-purpose number systems such as {{\bf R}} or {{\bf R}^3} are available, namely those operations which are “dimensionally consistent” or invariant (or more precisely, equivariant) with respect to the action of the underlying structure group. This sort of “coordinate-free” approach tends to be the one which is preferred by pure mathematicians, particularly in the various branches of modern geometry, in part because it can lead to greater conceptual clarity, as well as results of great generality.
Things are pretty quiet here during the holiday season, but one small thing I have been working on recently is a set of notes on special relativity that I will be working through in a few weeks with some bright high school students here at our local math circle.  I have only two hours to spend with this group, and it is unlikely that we will reach the end of the notes (in which I derive the famous mass-energy equivalence relation E=mc^2, largely following Einstein’s original derivation as discussed in this previous blog post); instead we will probably spend a fair chunk of time on related topics which do not actually require special relativity per se, such as spacetime diagrams, the Doppler shift effect, and an analysis of my airport puzzle.  This will be my first time doing something of this sort (in which I will be spending as much time interacting directly with the students as I would lecturing);  I’m not sure exactly how it will play out, being a little outside of my usual comfort zone of undergraduate and graduate teaching, but am looking forward to finding out how it goes.   (In particular, it may end up that the discussion deviates somewhat from my prepared notes.)
The material covered in my notes is certainly not new, but I ultimately decided that it was worth putting up here in case some readers here had any corrections or other feedback to contribute (which, as always, would be greatly appreciated).
[Dec 24: notes updated, in response to comments.]
I’ve just uploaded to the arXiv my paper “Mixing for progressions in non-abelian groups”, submitted to Forum of Mathematics, Sigma (which, along with sister publication Forum of Mathematics, Pi, has just opened up its online submission system). This paper is loosely related in subject matter to my two previous papers on polynomial expansion and on recurrence in quasirandom groups (with Vitaly Bergelson), although the methods here are rather different from those in those two papers. The starting motivation for this paper was a question posed in this foundational paper of Tim Gowers on quasirandom groups. In that paper, Gowers showed (among other things) that if {G} was a quasirandom group, patterns such as {(x,xg,xh, xgh)} were mixing in the sense that, for any four sets {A,B,C,D \subset G}, the number of such quadruples {(x,xg,xh, xgh)} in {A \times B \times C \times D} was equal to {(\mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^3}, where {\mu(A) := |A| / |G|}, and {o(1)} denotes a quantity that goes to zero as the quasirandomness of the group goes to infinity. In my recent paper with Vitaly, we also considered mixing properties of some other patterns, namely {(x,xg,gx)} and {(g,x,xg,gx)}. This paper is concerned instead with the pattern {(x,xg,xg^2)}, that is to say a geometric progression of length three. As observed by Gowers, by applying (a suitably quantitative version of) Roth’s theorem in (cosets of) a cyclic group, one can obtain a recurrence theorem for this pattern without much effort: if {G} is an arbitrary finite group, and {A} is a subset of {G} with {\mu(A) \geq \delta}, then there are at least {c(\delta) |G|^2} pairs {(x,g) \in G^2} such that {x, xg, xg^2 \in A}, where {c(\delta)>0} is a quantity depending only on {\delta}. However, this argument does not settle the question of whether there is a stronger mixing property, in that the number of pairs {(x,g) \in G^2} such that {(x,xg,xg^2) \in A \times B \times C} should be {(\mu(A)\mu(B)\mu(C)+o(1)) |G|^2} for any {A,B,C \subset G}. Informally, this would assert that for {x, g} chosen uniformly at random from {G}, the triplet {(x, xg, xg^2)} should resemble a uniformly selected element of {G^3} in some weak sense.
For non-quasirandom groups, such mixing properties can certainly fail. For instance, if {G} is the cyclic group {G = ({\bf Z}/N{\bf Z},+)} (which is abelian and thus highly non-quasirandom) with the additive group operation, and {A = \{1,\ldots,\lfloor \delta N\rfloor\}} for some small but fixed {\delta > 0}, then {\mu(A) = \delta + o(1)} in the limit {N \rightarrow \infty}, but the number of pairs {(x,g) \in G^2} with {x, x+g, x+2g \in A} is {(\delta^2/2 + o(1)) |G|^2} rather than {(\delta^3+o(1)) |G|^2}. The problem here is that the identity {(x+2g) = 2(x+g) - x} ensures that if {x} and {x+g} both lie in {A}, then {x+2g} has a highly elevated likelihood of also falling in {A}. One can view {A} as the preimage of a small ball under the one-dimensional representation {\rho: G \rightarrow U(1)} defined by {\rho(n) := e^{2\pi i n/N}}; similar obstructions to mixing can also be constructed from other low-dimensional representations.
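This obstruction is easy to observe numerically; the following short sketch (with illustrative parameters {N=400} and {\delta=0.2}, not taken from the paper) counts such pairs in {{\bf Z}/N{\bf Z}} and recovers a density near {\delta^2/2} rather than {\delta^3}:

```python
# Count pairs (x,g) in Z/NZ with x, x+g, x+2g all in A = {1,...,floor(dN)}.
N, delta = 400, 0.2
A = set(range(1, int(delta * N) + 1))

count = sum(1 for x in range(N) for g in range(N)
            if x in A and (x + g) % N in A and (x + 2 * g) % N in A)

print(count / N**2, delta**2 / 2, delta**3)  # ~0.02 vs 0.02 vs 0.008
```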
However, by definition, quasirandom groups do not have low-dimensional representations, and Gowers asked whether mixing for {(x,xg,xg^2)} could hold for quasirandom groups. I do not know if this is the case for arbitrary quasirandom groups, but I was able to settle the question for a specific class of quasirandom groups, namely the special linear groups {G := SL_d(F)} over a finite field {F} in the regime where the dimension {d} is bounded (but is at least two) and {F} is large. Indeed, for such groups I can obtain a count of {(\mu(A) \mu(B) \mu(C) + O( |F|^{-\min(d-1,2)/8} )) |G|^2} for the number of pairs {(x,g) \in G^2} with {(x, xg, xg^2) \in A \times B \times C}. In fact, I have the somewhat stronger statement that there are {(\mu(A) \mu(B) \mu(C) \mu(D) + O( |F|^{-\min(d-1,2)/8} )) |G|^2} pairs {(x,g) \in G^2} with {(x,xg,xg^2,g) \in A \times B \times C \times D} for any {A,B,C,D \subset G}.
I was also able to obtain a partial result for the length four progression {(x,xg,xg^2, xg^3)} in the simpler two-dimensional case {G = SL_2(F)}, but I had to make the unusual restriction that the group element {g \in G} was hyperbolic in the sense that it was diagonalisable over the finite field {F} (as opposed to diagonalisable over the algebraic closure {\overline{F}} of that field); this amounts to the discriminant of the matrix being a quadratic residue, and this holds for approximately half of the elements of {G}. The result is then that for any {A,B,C,D \subset G}, one has {(\frac{1}{2} \mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^2} pairs {(x,g) \in G^2} with {g} hyperbolic and {(x,xg,xg^2,xg^3) \in A \times B \times C \times D}. (Again, I actually show a slightly stronger statement in which {g} is restricted to an arbitrary subset {E} of hyperbolic elements.)
For the length three argument, the main tools used are the Cauchy-Schwarz inequality, the quasirandomness of {G}, and some algebraic geometry to ensure that a certain family of probability measures on {G} that are defined algebraically are approximately uniformly distributed. The length four argument is significantly more difficult and relies on a rather ad hoc argument involving, among other things, expander properties related to the work of Bourgain and Gamburd, and also a “twisted” version of an argument of Gowers that is used (among other things) to establish an inverse theorem for the {U^3} norm.
I give some details of these arguments below the fold.
Perhaps the most important structural result about general large dense graphs is the Szemerédi regularity lemma. Here is a standard formulation of that lemma:
Lemma 1 (Szemerédi regularity lemma) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0}. Then there exists a partition {V = V_1 \cup \ldots \cup V_M} for some {M \leq M(\epsilon)} with the property that for all but at most {\epsilon M^2} of the pairs {1 \leq i \leq j \leq M}, the pair {V_i, V_j} is {\epsilon}-regular in the sense that
\displaystyle  | d( A, B ) - d( V_i, V_j ) | \leq \epsilon
whenever {A \subset V_i, B \subset V_j} are such that {|A| \geq \epsilon |V_i|} and {|B| \geq \epsilon |V_j|}, and {d(A,B) := |\{ (a,b) \in A \times B: \{a,b\} \in E \}|/|A| |B|} is the edge density between {A} and {B}. Furthermore, the partition is equitable in the sense that {||V_i| - |V_j|| \leq 1} for all {1 \leq i \leq j \leq M}.
There are many proofs of this lemma, which is actually not that difficult to establish; see for instance these previous blog posts for some examples. In this post I would like to record one further proof, based on the spectral decomposition of the adjacency matrix of {G}, which is essentially due to Frieze and Kannan. (Strictly speaking, Frieze and Kannan used a variant of this argument to establish a weaker form of the regularity lemma, but it is not difficult to modify the Frieze-Kannan argument to obtain the usual form of the regularity lemma instead. Some closely related spectral regularity lemmas were also developed by Szegedy.) I found recently (while speaking at the Abel conference in honour of this year’s laureate, Endre Szemerédi) that this particular argument is not as widely known among graph theory experts as I had thought, so I thought I would record it here.
For reasons of exposition, it is convenient to first establish a slightly weaker form of the lemma, in which one drops the hypothesis of equitability (but then has to weight the cells {V_i} by their magnitude when counting bad pairs):
Lemma 2 (Szemerédi regularity lemma, weakened variant) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0}. Then there exists a partition {V = V_1 \cup \ldots \cup V_M} for some {M \leq M(\epsilon)} with the property that for all pairs {(i,j) \in \{1,\ldots,M\}^2} outside of an exceptional set {\Sigma}, one has
\displaystyle  | E(A,B) - d_{ij} |A| |B| | \ll \epsilon |V_i| |V_j| \ \ \ \ \ (1)
whenever {A \subset V_i, B \subset V_j}, for some real number {d_{ij}}, where {E(A,B) := |\{ (a,b) \in A \times B: \{a,b\} \in E \}|} is the number of edges between {A} and {B}. Furthermore, we have
\displaystyle  \sum_{(i,j) \in \Sigma} |V_i| |V_j| \ll \epsilon |V|^2. \ \ \ \ \ (2)
Let us now prove Lemma 2. We enumerate {V} (after relabeling) as {V = \{1,\ldots,n\}}. The adjacency matrix {T} of the graph {G} is then a self-adjoint {n \times n} matrix, and thus admits an eigenvalue decomposition
\displaystyle  T = \sum_{i=1}^n \lambda_i u_i^* u_i
for some orthonormal basis {u_1,\ldots,u_n} of {{\bf C}^n} and some eigenvalues {\lambda_1,\ldots,\lambda_n \in {\bf R}}, which we arrange in decreasing order of magnitude:
\displaystyle  |\lambda_1| \geq \ldots \geq |\lambda_n|.
We can compute the trace of {T^2} as
\displaystyle  \hbox{tr}(T^2) = \sum_{i=1}^n |\lambda_i|^2.
But we also have {\hbox{tr}(T^2) = 2|E| \leq n^2}, so
\displaystyle  \sum_{i=1}^n |\lambda_i|^2 \leq n^2. \ \ \ \ \ (3)
Among other things, this implies that
\displaystyle  |\lambda_i| \leq \frac{n}{\sqrt{i}} \ \ \ \ \ (4)
for all {i \geq 1}.
Let {F: {\bf N} \rightarrow {\bf N}} be a function (depending on {\epsilon}) to be chosen later, with {F(i) \geq i} for all {i}. Applying (3) and the pigeonhole principle (or the finite convergence principle, see this blog post), we can find {J \leq C(F,\epsilon)} such that
\displaystyle  \sum_{J \leq i < F(J)} |\lambda_i|^2 \leq \epsilon^3 n^2.
(Indeed, the bound on {J} is basically {F} iterated {1/\epsilon^3} times.) We can now split
\displaystyle  T = T_1 + T_2 + T_3, \ \ \ \ \ (5)
where {T_1} is the “structured” component
\displaystyle  T_1 := \sum_{i < J} \lambda_i u_i^* u_i, \ \ \ \ \ (6)
{T_2} is the “small” component
\displaystyle  T_2 := \sum_{J \leq i < F(J)} \lambda_i u_i^* u_i, \ \ \ \ \ (7)
and {T_3} is the “pseudorandom” component
\displaystyle  T_3 := \sum_{i \geq F(J)} \lambda_i u_i^* u_i. \ \ \ \ \ (8)
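For concreteness, here is a short NumPy sketch of this splitting on a random graph; the thresholds {J} and {F(J)} below are illustrative choices rather than the quantities produced by the pigeonholing step:

```python
# Spectral splitting (5)-(8) of an adjacency matrix, by eigenvalue magnitude.
import numpy as np

rng = np.random.default_rng(8)
n = 200
T = np.triu(rng.random((n, n)) < 0.5, k=1).astype(float)
T = T + T.T                                   # adjacency matrix of G(n, 1/2)

lam, U = np.linalg.eigh(T)                    # T = U diag(lam) U^T
order = np.argsort(-np.abs(lam))              # decreasing |lambda_i|
lam, U = lam[order], U[:, order]

assert np.sum(lam**2) <= n**2                 # inequality (3): tr(T^2) <= n^2

J, FJ = 5, 50                                 # illustrative thresholds
piece = lambda idx: (U[:, idx] * lam[idx]) @ U[:, idx].T
T1 = piece(slice(0, J))                       # "structured" component (6)
T2 = piece(slice(J, FJ))                      # "small" component (7)
T3 = piece(slice(FJ, n))                      # "pseudorandom" component (8)
assert np.allclose(T, T1 + T2 + T3)           # the splitting (5)
```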
We now design a vertex partition to make {T_1} approximately constant on most cells. For each {i < J}, we partition {V} into {O_{J,\epsilon}(1)} cells on which {u_i} (viewed as a function from {V} to {{\bf C}}) only fluctuates by {O(\epsilon n^{-1/2} /J)}, plus an exceptional cell of size {O( \frac{\epsilon}{J} |V|)} coming from the values where {|u_i|} is excessively large (larger than {\sqrt{\frac{J}{\epsilon}} n^{-1/2}}). Combining all these partitions together, we can write {V = V_1 \cup \ldots \cup V_{M-1} \cup V_M} for some {M = O_{J,\epsilon}(1)}, where {V_M} has cardinality at most {\epsilon |V|}, and for all {1 \leq i \leq M-1}, the eigenfunctions {u_1,\ldots,u_{J-1}} all fluctuate by at most {O(\epsilon n^{-1/2}/J)}. In particular, if {1 \leq i,j \leq M-1}, then (by (4) and (6)) the entries of {T_1} fluctuate by at most {O(\epsilon)} on each block {V_i \times V_j}. If we let {d_{ij}} be the mean value of these entries on {V_i \times V_j}, we thus have
\displaystyle  1_B^* T_1 1_A = d_{ij} |A| |B| + O( \epsilon |V_i| |V_j| ) \ \ \ \ \ (9)
for any {1 \leq i,j \leq M-1} and {A \subset V_i, B \subset V_j}, where we view the indicator functions {1_A, 1_B} as column vectors of dimension {n}.
Next, we observe from (3) and (7) that {\hbox{tr} T_2^2 \leq \epsilon^3 n^2}. If we let {x_{ab}} be the coefficients of {T_2}, we thus have
\displaystyle  \sum_{a,b \in V} |x_{ab}|^2 \leq \epsilon^3 n^2
and hence by Markov’s inequality we have
\displaystyle  \sum_{a \in V_i} \sum_{b \in V_j} |x_{ab}|^2 \leq \epsilon^2 |V_i| |V_j| \ \ \ \ \ (10)
for all pairs {(i,j) \in \{1,\ldots,M-1\}^2} outside of an exceptional set {\Sigma_1} with
\displaystyle  \sum_{(i,j) \in \Sigma_1} |V_i| |V_j| \leq \epsilon |V|^2.
If {(i,j) \in \{1,\ldots,M-1\}^2} avoids {\Sigma_1}, we thus have
\displaystyle  1_B^* T_2 1_A = O( \epsilon |V_i| |V_j| ) \ \ \ \ \ (11)
for any {A \subset V_i, B \subset V_j}, by (10) and the Cauchy-Schwarz inequality.
Finally, to control {T_3} we see from (4) and (8) that {T_3} has an operator norm of at most {n/\sqrt{F(J)}}. In particular, we have from the Cauchy-Schwarz inequality that
\displaystyle  1_B^* T_3 1_A = O( n^2 / \sqrt{F(J)} ) \ \ \ \ \ (12)
for any {A, B \subset V}.
Let {\Sigma} be the set of all pairs {(i,j) \in \{1,\ldots,M\}^2} where either {(i,j) \in \Sigma_1}, {i = M}, {j=M}, or
\displaystyle  \min(|V_i|, |V_j|) \leq \frac{\epsilon}{M} n.
One easily verifies that (2) holds. If {(i,j) \in \{1,\ldots,M\}^2} is not in {\Sigma}, then by summing (9), (11), (12) and using (5), we see that
\displaystyle  1_B^* T 1_A = d_{ij} |A| |B| + O( \epsilon |V_i| |V_j| ) + O( n^2 / \sqrt{F(J)} ) \ \ \ \ \ (13)
for all {A \subset V_i, B \subset V_j}. The left-hand side is just {E(A,B)}. As {(i,j) \not \in \Sigma}, we have
\displaystyle  |V_i|, |V_j| > \frac{\epsilon}{M} n
and so (since {M = O_{J,\epsilon}(1)})
\displaystyle  n^2 / \sqrt{F(J)} \ll_{J,\epsilon} |V_i| |V_j| / \sqrt{F(J)}.
If we let {F} be a sufficiently rapidly growing function of {J} that depends on {\epsilon}, the second error term in (13) can be absorbed in the first, and (1) follows. This concludes the proof of Lemma 2.
To prove Lemma 1, one argues similarly (after modifying {\epsilon} as necessary), except that the initial partition {V_1,\ldots,V_M} of {V} constructed above needs to be subdivided further into equitable components (of size {\epsilon |V|/M+O(1)}), plus some remainder sets which can be aggregated into an exceptional component of size {O( \epsilon |V| )} (and which can then be redistributed amongst the other components to arrive at a truly equitable partition). We omit the details.
Remark 1 It is easy to verify that {F} needs to be growing exponentially in {J} in order for the above argument to work, which leads to tower-exponential bounds in the number of cells {M} in the partition. It was shown by Gowers that a tower-exponential bound is actually necessary here. By varying {F}, one basically obtains the strong regularity lemma first established by Alon, Fischer, Krivelevich, and Szegedy; in the opposite direction, setting {F(J) := J} essentially gives the weak regularity lemma of Frieze and Kannan.
Remark 2 If we specialise to a Cayley graph, in which {V = (V,+)} is a finite abelian group and {E = \{ (a,b): a-b \in A \}} for some (symmetric) subset {A} of {V}, then the eigenvectors are characters, and one essentially recovers the arithmetic regularity lemma of Green, in which the vertex partition classes {V_i} are given by Bohr sets (and one can then place additional regularity properties on these Bohr sets with some additional arguments). The components {T_1, T_2, T_3} of {T}, representing high, medium, and low eigenvalues of {T}, then become a decomposition associated to high, medium, and low Fourier coefficients of {A}.
Remark 3 The use of spectral theory here is parallel to the use of Fourier analysis to establish results such as Roth’s theorem on arithmetic progressions of length three. In analogy with this, one could view hypergraph regularity as being a sort of “higher order spectral theory”, although this spectral perspective is not as convenient as it is in the graph case.
Lars Hörmander, who made fundamental contributions to all areas of partial differential equations, but particularly in developing the analysis of variable-coefficient linear PDE, died last Sunday, aged 81.
I unfortunately never met Hörmander personally, but of course I encountered his work all the time while working in PDE. One of his major contributions to the subject was to systematically develop the calculus of Fourier integral operators (FIOs), which are a substantial generalisation of pseudodifferential operators and which can be used to (approximately) solve linear partial differential equations, or to transform such equations into a more convenient form. Roughly speaking, Fourier integral operators are to linear PDE as canonical transformations are to Hamiltonian mechanics (and one can in fact view FIOs as a quantisation of a canonical transformation). They are a large class of transformations, for instance the Fourier transform, pseudodifferential operators, and smooth changes of the spatial variable are all examples of FIOs, and (as long as certain singular situations are avoided) the composition of two FIOs is again an FIO.
The full theory of FIOs is quite extensive, occupying the entire final volume of Hörmander’s famous four-volume series “The Analysis of Linear Partial Differential Operators”. I am certainly not going to attempt to summarise it here, but I thought I would try to motivate how these operators arise when trying to transform functions. For simplicity we will work with functions {f \in L^2({\bf R}^n)} on a Euclidean domain {{\bf R}^n} (although FIOs can certainly be defined on more general smooth manifolds, and there is an extension of the theory that also works on manifolds with boundary). As this will be a heuristic discussion, we will ignore all the (technical, but important) issues of smoothness or convergence with regards to the functions, integrals and limits that appear below, and be rather vague with terms such as “decaying” or “concentrated”.
A function {f \in L^2({\bf R}^n)} can be viewed from many different perspectives (reflecting the variety of bases, or approximate bases, that the Hilbert space {L^2({\bf R}^n)} offers). Most directly, we have the physical space perspective, viewing {f} as a function {x \mapsto f(x)} of the physical variable {x \in {\bf R}^n}. In many cases, this function will be concentrated in some subregion {\Omega} of physical space. For instance, a gaussian wave packet
\displaystyle  f(x) = A e^{-(x-x_0)^2/\hbar} e^{i \xi_0 \cdot x/\hbar}, \ \ \ \ \ (1)
where {\hbar > 0}, {A \in {\bf C}} and {x_0, \xi_0 \in {\bf R}^n} are parameters, would be physically concentrated in the ball {B(x_0,\sqrt{\hbar})}. Then we have the frequency space (or momentum space) perspective, viewing {f} now as a function {\xi \mapsto \hat f(\xi)} of the frequency variable {\xi \in {\bf R}^n}. For this discussion, it will be convenient to normalise the Fourier transform using a small constant {\hbar > 0} (which has the physical interpretation of Planck’s constant if one is doing quantum mechanics), thus
\displaystyle  \hat f(\xi) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{-i\xi \cdot x/\hbar} f(x)\ dx.
For instance, for the gaussian wave packet (1), one has
\displaystyle  \hat f(\xi) = A e^{i\xi_0 \cdot x_0/\hbar} e^{-(\xi-\xi_0)^2/\hbar} e^{-i \xi \cdot x_0/\hbar},
and so we see that {f} is concentrated in frequency space in the ball {B(\xi_0,\sqrt{\hbar})}.
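This frequency concentration is easy to see numerically; the following sketch (in one dimension, with illustrative parameters not taken from the text) approximates the {\hbar}-scaled Fourier transform of (1) by a Riemann sum and locates its peak:

```python
# The hbar-scaled Fourier transform of the wave packet (1) peaks near xi_0.
import numpy as np

hbar, x0, xi0 = 0.01, 1.0, 2.0
x = np.linspace(x0 - 1, x0 + 1, 4000)
f = np.exp(-(x - x0)**2 / hbar) * np.exp(1j * xi0 * x / hbar)

xi = np.linspace(xi0 - 1, xi0 + 1, 400)
# Riemann-sum approximation to (2 pi hbar)^{-1/2} \int e^{-i xi x/hbar} f(x) dx
dx = x[1] - x[0]
fhat = np.array([np.sum(np.exp(-1j * k * x / hbar) * f) * dx for k in xi])
fhat /= np.sqrt(2 * np.pi * hbar)

peak = xi[np.argmax(np.abs(fhat))]
assert abs(peak - xi0) < np.sqrt(hbar)   # concentrated in B(xi0, sqrt(hbar))
```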
However, there is a third (but less rigorous) way to view a function {f} in {L^2({\bf R}^n)}, which is the phase space perspective in which one tries to view {f} as distributed simultaneously in physical space and in frequency space, thus being something like a measure on the phase space {T^* {\bf R}^n := \{ (x,\xi): x, \xi \in {\bf R}^n\}}. Thus, for instance, the function (1) should heuristically be concentrated on the region {B(x_0,\sqrt{\hbar}) \times B(\xi_0,\sqrt{\hbar})} in phase space. Unfortunately, due to the uncertainty principle, there is no completely satisfactory way to canonically and rigorously define what the “phase space portrait” of a function {f} should be. (For instance, the Wigner transform of {f} can be viewed as an attempt to describe the distribution of the {L^2} energy of {f} in phase space, except that this transform can take negative or even complex values; see Folland’s book for further discussion.) Still, it is a very useful heuristic to think of functions as having a phase space portrait, which is something like a non-negative measure on phase space that captures the distribution of functions in both space and frequency, albeit with some “quantum fuzziness” that shows up whenever one tries to inspect this measure at scales of physical space and frequency space that together violate the uncertainty principle. (The score of a piece of music is a good everyday example of a phase space portrait of a function, in this case a sound wave; here, the physical space is the time axis (the horizontal dimension of the score) and the frequency space is the vertical dimension. Here, the time and frequency scales involved are well above the uncertainty principle limit (a typical note lasts many hundreds of cycles, whereas the uncertainty principle kicks in at {O(1)} cycles) and so there is no obstruction here to musical notation being unambiguous.) Furthermore, if one takes certain asymptotic limits, one can recover a precise notion of a phase space portrait; for instance if one takes the semiclassical limit {\hbar \rightarrow 0} then, under certain circumstances, the phase space portrait converges to a well-defined classical probability measure on phase space; closely related to this is the high frequency limit of a fixed function, which among other things defines the wave front set of that function, which can be viewed as another asymptotic realisation of the phase space portrait concept.
If functions in {L^2({\bf R}^n)} can be viewed as a sort of distribution in phase space, then linear operators {T: L^2({\bf R}^n) \rightarrow L^2({\bf R}^n)} should be viewed as various transformations on such distributions on phase space. For instance, a pseudodifferential operator {a(X,D)} should correspond (as a zeroth approximation) to multiplying a phase space distribution by the symbol {a(x,\xi)}of that operator, as discussed in this previous blog post. Note that such operators only change the amplitude of the phase space distribution, but not the support of that distribution.
Now we turn to operators that alter the support of a phase space distribution, rather than the amplitude; we will focus on unitary operators to emphasise the amplitude preservation aspect. These will eventually be key examples of Fourier integral operators. A physical translation {Tf(x) := f(x-x_0)} should correspond to pushing forward the distribution by the transformation {(x,\xi) \mapsto (x+x_0,\xi)}, as can be seen by comparing the physical and frequency space supports of {Tf} with that of {f}. Similarly, a frequency modulation {Tf(x) := e^{i \xi_0 \cdot x/\hbar} f(x)} should correspond to the transformation {(x,\xi) \mapsto (x,\xi+\xi_0)}; a linear change of variables {Tf(x) := |\hbox{det} L|^{-1/2} f(L^{-1} x)}, where {L: {\bf R}^n \rightarrow {\bf R}^n} is an invertible linear transformation, should correspond to {(x,\xi) \mapsto (Lx, (L^*)^{-1} \xi)}; and finally, the Fourier transform {Tf(x) := \hat f(x)} should correspond to the transformation {(x,\xi) \mapsto (\xi,-x)}.
Based on these examples, one may hope that given any diffeomorphism {\Phi: T^* {\bf R}^n \rightarrow T^* {\bf R}^n} of phase space, one could associate some sort of unitary (or approximately unitary) operator {T_\Phi: L^2({\bf R}^n) \rightarrow L^2({\bf R}^n)}, which (heuristically, at least) pushes the phase space portrait of a function forward by {\Phi}. However, there is an obstruction to doing so, which can be explained as follows. If {T_\Phi} pushes phase space portraits by {\Phi}, and pseudodifferential operators {a(X,D)} multiply phase space portraits by {a}, then this suggests the intertwining relationship
\displaystyle  a(X,D) T_\Phi \approx T_\Phi (a \circ \Phi)(X,D),
and thus {(a \circ \Phi)(X,D)} is approximately conjugate to {a(X,D)}:
\displaystyle  (a \circ \Phi)(X,D) \approx T_\Phi^{-1} a(X,D) T_\Phi. \ \ \ \ \ (2)
The formalisation of this fact in the theory of Fourier integral operators is known as Egorov’s theorem, due to Yu Egorov (and not to be confused with the more widely known theorem of Dmitri Egorov in measure theory).
Applying commutators, we conclude the approximate conjugacy relationship
\displaystyle  \frac{1}{i\hbar} [(a \circ \Phi)(X,D), (b \circ \Phi)(X,D)] \approx T_\Phi^{-1} \frac{1}{i\hbar} [a(X,D), b(X,D)] T_\Phi.
Now, the pseudodifferential calculus (as discussed in this previous post) tells us (heuristically, at least) that
\displaystyle  \frac{1}{i\hbar} [a(X,D), b(X,D)] \approx \{ a, b \}(X,D)
and
\displaystyle  \frac{1}{i\hbar} [(a \circ \Phi)(X,D), (b \circ \Phi)(X,D)] \approx \{ a \circ \Phi, b \circ \Phi \}(X,D)
where {\{,\}} is the Poisson bracket. Comparing this with (2), we are then led to the compatibility condition
\displaystyle  \{ a \circ \Phi, b \circ \Phi \} \approx \{ a, b \} \circ \Phi,
thus {\Phi} needs to preserve (approximately, at least) the Poisson bracket, or equivalently {\Phi} needs to be a symplectomorphism (again, approximately at least).
Now suppose that {\Phi: T^* {\bf R}^n \rightarrow T^* {\bf R}^n} is a symplectomorphism. This is morally equivalent to the graph {\Sigma := \{ (z, \Phi(z)): z \in T^* {\bf R}^n \}} being a Lagrangian submanifold of {T^* {\bf R}^n \times T^* {\bf R}^n} (where we give the second copy of phase space the negative {-\omega} of the usual symplectic form {\omega}, thus yielding {\omega \oplus -\omega} as the full symplectic form on {T^* {\bf R}^n \times T^* {\bf R}^n}; this is another instantiation of the closed graph theorem, as mentioned in this previous post). This graph is known as the canonical relation for the (putative) FIO that is associated to {\Phi}. To understand what it means for this graph to be Lagrangian, we coordinatise {T^* {\bf R}^n \times T^* {\bf R}^n} as {(x,\xi,y,\eta)} and suppose temporarily that this graph was (locally, at least) a smooth graph in the {x} and {y} variables, thus
\displaystyle  \Sigma = \{ (x, F(x,y), y, G(x,y)): x, y \in {\bf R}^n \}
for some smooth functions {F, G: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}. A brief computation shows that the Lagrangian property of {\Sigma} is then equivalent to the compatibility conditions
\displaystyle  \frac{\partial F_i}{\partial x_j} = \frac{\partial F_j}{\partial x_i}
\displaystyle  \frac{\partial G_i}{\partial y_j} = \frac{\partial G_j}{\partial y_i}
\displaystyle  \frac{\partial F_i}{\partial y_j} = - \frac{\partial G_j}{\partial x_i}
for {i,j=1,\ldots,n}, where {F_1,\ldots,F_n, G_1,\ldots,G_n} denote the components of {F,G}. Some Fourier analysis (or Hodge theory) lets us solve these equations as
\displaystyle  F_i = -\frac{\partial \phi}{\partial x_i}; \quad G_j = \frac{\partial \phi}{\partial y_j}
for some smooth potential function {\phi: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}}. Thus, we have parameterised our graph {\Sigma} as
\displaystyle  \Sigma = \{ (x, -\nabla_x \phi(x,y), y, \nabla_y \phi(x,y)): x,y \in {\bf R}^n \} \ \ \ \ \ (3)
so that {\Phi} maps {(x, -\nabla_x \phi(x,y))} to {(y, \nabla_y \phi(x,y))}.
A reasonable candidate for an operator associated to {\Phi} and {\Sigma} in this fashion is the oscillatory integral operator
\displaystyle  Tf(y) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i \phi(x,y)/\hbar} a(x,y) f(x)\ dx \ \ \ \ \ (4)
for some smooth amplitude function {a} (note that the Fourier transform is the special case when {a=1} and {\phi(x,y)=xy}, which helps explain the genesis of the term “Fourier integral operator”). Indeed, if one computes an inner product {\int_{{\bf R}^n} Tf(y) \overline{g(y)}\ dy} for gaussian wave packets {f, g} of the form (1) and localised in phase space near {(x_0,\xi_0), (y_0,\eta_0)} respectively, then a Taylor expansion of {\phi} around {(x_0,y_0)}, followed by a stationary phase computation, shows (again heuristically, and assuming {\phi} is suitably non-degenerate) that {T} has (3) as its canonical relation. (Furthermore, a refinement of this stationary phase calculation suggests that if {a} is normalised to be the half-density {|\det \nabla_x \nabla_y \phi|^{1/2}}, then {T} should be approximately unitary.) As such, we view (4) as an example of a Fourier integral operator (assuming various smoothness and non-degeneracy hypotheses on the phase {\phi} and amplitude {a} which we do not detail here).
Of course, it may be the case that {\Sigma} is not a graph in the {x,y} coordinates (for instance, the key examples of translation, modulation, and dilation are not of this form), but then it is often a graph in some other pair of coordinates, such as {\xi,y}. In that case one can compose the oscillatory integral construction given above with a Fourier transform, giving another class of FIOs of the form
\displaystyle  Tf(y) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i \phi(\xi,y)/\hbar} a(\xi,y) \hat f(\xi)\ d\xi. \ \ \ \ \ (5)
This class of FIOs covers many important cases; for instance, the translation, modulation, and dilation operators considered earlier can be written in this form after some Fourier analysis. Another typical example is the half-wave propagator {T := e^{it \sqrt{-\Delta}}} for some time {t \in {\bf R}}, which can be written in the form
\displaystyle  Tf(y) = \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i (\xi \cdot y + t |\xi|)/\hbar} a(\xi,y) \hat f(\xi)\ d\xi.
This corresponds to the phase space transformation {(x,\xi) \mapsto (x+t\xi/|\xi|, \xi)}, which can be viewed as the classical propagator associated to the “quantum” propagator {e^{it\sqrt{-\Delta}}}. More generally, propagators for linear Hamiltonian partial differential equations can often be expressed (at least approximately) by Fourier integral operators corresponding to the propagator of the classical Hamiltonian flow associated to the symbol of the Hamiltonian operator {H}; this leads to an important mathematical formalisation of the correspondence principle between quantum mechanics and classical mechanics, that is one of the foundations of microlocal analysis and which was extensively developed in Hörmander’s work. (More recently, numerically stable versions of this theory have been developed to allow for rapid and accurate numerical solutions to various linear PDE, for instance through Emmanuel Candès’ theory of curvelets, so the theory that Hörmander built now has some quite significant practical applications in areas such as geology.)
In some cases, the canonical relation {\Sigma} may have some singularities (such as fold singularities) which prevent it from being written as graphs in the previous senses, but the theory for defining FIOs even in these cases, and in developing their calculus, is now well established, in large part due to the foundational work of Hörmander.
I’ve just uploaded to the arXiv my joint paper with Vitaly Bergelson, “Multiple recurrence in quasirandom groups”, which is submitted to Geom. Func. Anal.. This paper builds upon a paper of Gowers in which he introduced the concept of a quasirandom group, and established some mixing (or recurrence) properties of such groups. A {D}-quasirandom group is a finite group with no non-trivial unitary representations of dimension at most {D}. We will informally refer to a “quasirandom group” as a {D}-quasirandom group with the quasirandomness parameter {D} large (more formally, one can work with a sequence of {D_n}-quasirandom groups with {D_n} going to infinity). A typical example of a quasirandom group is {SL_2(F_p)} where {p} is a large prime. Quasirandom groups are discussed in depth in this blog post. One of the key properties of quasirandom groups established in Gowers’ paper is the following “weak mixing” property: if {A, B} are subsets of {G}, then for “almost all” {g \in G}, one has
\displaystyle  \mu( A \cap gB ) \approx \mu(A) \mu(B) \ \ \ \ \ (1)
where {\mu(A) := |A|/|G|} denotes the density of {A} in {G}. Here, we use {x \approx y} to informally represent an estimate of the form {x=y+o(1)} (where {o(1)} is a quantity that goes to zero when the quasirandomness parameter {D} goes to infinity), and “almost all {g \in G}” denotes “for all {g} in a subset of {G} of density {1-o(1)}“. As a corollary, if {A,B,C} have positive density in {G} (by which we mean that {\mu(A)} is bounded away from zero, uniformly in the quasirandomness parameter {D}, and similarly for {B,C}), then (if the quasirandomness parameter {D} is sufficiently large) we can find elements {g, x \in G} such that {g \in A}, {x \in B}, {gx \in C}. In fact we can find approximately {\mu(A)\mu(B)\mu(C) |G|^2} such pairs {(g,x)}. To put it another way: if we choose {g,x} uniformly and independently at random from {G}, then the events {g \in A}, {x \in B}, {gx \in C} are approximately independent (thus the random variable {(g,x,gx) \in G^3} resembles a uniformly distributed random variable on {G^3} in some weak sense). One can also express this mixing property in integral form as
\displaystyle  \int_G \int_G f_1(g) f_2(x) f_3(gx)\ d\mu(g) d\mu(x) \approx (\int_G f_1\ d\mu) (\int_G f_2\ d\mu) (\int_G f_3\ d\mu)
for any bounded functions {f_1,f_2,f_3: G \rightarrow {\bf R}}. (Of course, with {G} being finite, one could replace the integrals here by finite averages if desired.) Or in probabilistic language, we have
\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) \approx \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3)
where {g, x, x_1, x_2, x_3} are drawn uniformly and independently at random from {G}.
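As a purely illustrative sanity check of (1) (not from the paper; the prime {p=7}, the random seed, and the set densities below are arbitrary choices of mine), one can enumerate the quasirandom group {SL_2(F_7)} (which is {D}-quasirandom with {D = (p-1)/2 = 3}) by brute force in Python, draw random dense subsets {A, B}, and compare {\mu(A \cap gB)} with {\mu(A)\mu(B)}:

import itertools, random

p = 7  # SL_2(F_p) is D-quasirandom with D = (p-1)/2 = 3

# Enumerate SL_2(F_p) as tuples (a, b, c, d) with ad - bc = 1 (mod p).
G = [(a, b, c, d)
     for a, b, c, d in itertools.product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]

def mul(g, h):
    a, b, c, d = g
    e, f, i, j = h
    return ((a * e + b * i) % p, (a * f + b * j) % p,
            (c * e + d * i) % p, (c * f + d * j) % p)

random.seed(0)
A = set(random.sample(G, len(G) // 2))   # a random set of density 1/2
B = set(random.sample(G, len(G) // 2))

# Compare mu(A \cap gB) with mu(A) mu(B) = 0.25 for a few random g.
for g in random.sample(G, 5):
    gB = {mul(g, b) for b in B}
    print(len(A & gB) / len(G))          # each value should be near 0.25

Of course, {p=7} is far from the asymptotic regime {D \rightarrow \infty}, so one should only expect rough agreement here; the concentration improves for larger primes.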
As observed in Gowers’ paper, one can iterate this observation to find “parallelopipeds” of any given dimension in dense subsets of {G}. For instance, applying (1) with {A,B,C} replaced by {A \cap hB}, {C \cap hD}, and {E \cap hF}, one can assert (after some relabeling) that for {g,h,x} chosen uniformly and independently at random from {G}, the events {g \in A}, {h \in B}, {gh \in C}, {x \in D}, {gx \in E}, {hx \in F}, {ghx \in H} are approximately independent whenever {A,B,C,D,E,F,H} are dense subsets of {G}; thus the tuple {(g,h,gh,x,gx,hx,ghx)} resembles a uniformly distributed random variable in {G^7} in some weak sense.
However, there are other tuples for which the above iteration argument does not seem to apply. One of the simplest tuples in this vein is the tuple {(g, x, xg, gx)} in {G^4}, when {g, x} are drawn uniformly at random from a quasirandom group {G}. Here, one does not expect the tuple to behave as if it were uniformly distributed in {G^4}, because there is an obvious constraint connecting the last two components {gx, xg} of this tuple: they must lie in the same conjugacy class! In particular, if {A} is a subset of {G} that is the union of conjugacy classes, then the events {gx \in A}, {xg \in A} are perfectly correlated, so that {\mu( gx \in A, xg \in A)} is equal to {\mu(A)} rather than {\mu(A)^2}. Our main result, though, is that in a quasirandom group, this is (approximately) the only constraint on the tuple. More precisely, we have
Theorem 1 Let {G} be a {D}-quasirandom group, and let {g, x} be drawn uniformly at random from {G}. Then for any {f_1,f_2,f_3,f_4: G \rightarrow [-1,1]}, we have
\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) f_4(xg) = \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3) f_4(x_4) + o(1)
where {o(1)} goes to zero as {D \rightarrow \infty}, {x_1,x_2,x_3} are drawn uniformly and independently at random from {G}, and {x_4} is drawn uniformly at random from the conjugates of {x_3} for each fixed choice of {x_1,x_2,x_3}.
This is the probabilistic formulation of the above theorem; one can also phrase the theorem in other formulations (such as an integral formulation), and this is detailed in the paper. This theorem leads to a number of recurrence results; for instance, as a corollary of this result, we have
\displaystyle  \mu(A) \mu(B)^2 - o(1) \leq \mu( A \cap gB \cap Bg ) \leq \mu(A) \mu(B) + o(1)
for almost all {g \in G}, and any dense subsets {A, B} of {G}; the lower and upper bounds are sharp, with the lower bound being attained when {B} is randomly distributed, and the upper bound when {B} is conjugation-invariant.
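One can illustrate the two extremes of this bound numerically (again an illustration of my own, with arbitrary parameters): in {SL_2(F_7)}, take for {A} first a union of conjugacy classes (cut out by a condition on the trace, which is a conjugation invariant) and then a random set of the same size, and estimate {\mu( gx \in A, xg \in A )} by sampling:

import itertools, random

p = 7
G = [(a, b, c, d)
     for a, b, c, d in itertools.product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]

def mul(g, h):
    a, b, c, d = g
    e, f, i, j = h
    return ((a * e + b * i) % p, (a * f + b * j) % p,
            (c * e + d * i) % p, (c * f + d * j) % p)

random.seed(1)

# A conjugation-invariant set: matrices whose trace lies in {0, 1, 2}
# (a union of conjugacy classes, as the trace is conjugation-invariant).
A_conj = {g for g in G if (g[0] + g[3]) % p in {0, 1, 2}}
# A random set of the same size, for comparison.
A_rand = set(random.sample(G, len(A_conj)))

def corr(A, trials=20000):
    # Monte Carlo estimate of mu(gx in A and xg in A).
    hits = 0
    for _ in range(trials):
        g, x = random.choice(G), random.choice(G)
        hits += mul(g, x) in A and mul(x, g) in A
    return hits / trials

mu = len(A_conj) / len(G)
print("mu(A) =", round(mu, 3), "  mu(A)^2 =", round(mu * mu, 3))
print("conjugation-invariant A:", corr(A_conj))   # should be near mu(A)
print("random A:               ", corr(A_rand))   # should be near mu(A)^2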
To me, the more interesting thing here is not the result itself, but how it is proven. Vitaly and I were not able to find a purely finitary way to establish this mixing theorem. Instead, we had to first use the machinery of ultraproducts (as discussed in this previous post) to convert the finitary statement about a quasirandom group to an infinitary statement about a type of infinite group which we call an ultra quasirandom group (basically, an ultraproduct of increasingly quasirandom finite groups). This is analogous to how the Furstenberg correspondence principle is used to convert a finitary combinatorial problem into an infinitary ergodic theory problem.
Ultra quasirandom groups come equipped with a finite, countably additive measure known as Loeb measure {\mu_G}, which is very analogous to the Haar measure of a compact group, except that in the case of ultra quasirandom groups one does not quite have a topological structure that would give compactness. Instead, one has a slightly weaker structure known as a {\sigma}-topology, which is like a topology except that open sets are only closed under countable unions rather than arbitrary ones. There are some interesting measure-theoretic and topological issues regarding the distinction between topologies and {\sigma}-topologies (and between Haar measure and Loeb measure), but for this post it is perhaps best to gloss over these issues and pretend that ultra quasirandom groups {G} come with a Haar measure. One can then recast Theorem 1 as a mixing theorem for the left and right actions of the ultra quasirandom group {G} on itself, which roughly speaking is the assertion that
\displaystyle  \int_G f_1(x) L_g f_2(x) L_g R_g f_3(x)\ d\mu_G(x) \approx 0 \ \ \ \ \ (2)
for “almost all” {g \in G}, if {f_1, f_2, f_3} are bounded measurable functions on {G}, with {f_3} having zero mean on all conjugacy classes of {G}, where {L_g, R_g} are the left and right translation operators
\displaystyle  L_g f(x) := f(g^{-1} x); \quad R_g f(x) := f(xg).
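It may be worth noting explicitly that left and right translations always commute: for any {g, h \in G} one has

\displaystyle  L_g R_h f(x) = f(g^{-1} x h) = R_h L_g f(x),

so the families {(L_g)_{g \in G}} and {(R_h)_{h \in G}} give two commuting actions of {G} on functions on {G}; this is the structural fact exploited by the “two commuting shifts” framework discussed below.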
To establish this mixing theorem, we use the machinery of idempotent ultrafilters, which is a particularly useful tool for understanding the ergodic theory of actions of countable groups {G} that need not be amenable; in the non-amenable setting the classical ergodic averages do not make much sense, but ultrafilter-based averages are still available. To oversimplify substantially, the idempotent ultrafilter arguments let one establish mixing estimates of the form (2) for “many” elements {g} of an infinite-dimensional parallelopiped known as an IP system (provided that the actions {L_g,R_g} of this IP system obey some technical mixing hypotheses, but let’s ignore that for sake of this discussion). The claim then follows by using the quasirandomness hypothesis to show that if the estimate (2) failed for a large set of {g \in G}, then this large set would contain an IP system, contradicting the previous claim.
Idempotent ultrafilters are an extremely infinitary type of mathematical object (one has to use Zorn’s lemma no fewer than three times just to construct one of these objects!). So it is quite remarkable that they can be used to establish a finitary theorem such as Theorem 1, though as is often the case with such infinitary arguments, one gets absolutely no quantitative control whatsoever on the error terms {o(1)} appearing in that theorem. (It is also mildly amusing to note that our arguments involve the use of ultrafilters in two completely different ways: firstly in order to set up the ultraproduct that converts the finitary mixing problem to an infinitary one, and secondly to solve the infinitary mixing problem. Despite some superficial similarities, there appear to be no substantial commonalities between these two usages of ultrafilters.) There is already a fair amount of literature on using idempotent ultrafilter methods in infinitary ergodic theory, and perhaps by further development of ultraproduct correspondence principles, one can use such methods to obtain further finitary consequences (although the state of the art for idempotent ultrafilter ergodic theory has not advanced much beyond the analysis of two commuting shifts {L_g, R_g} currently, which is the main reason why our arguments only handle the pattern {(g,x,xg,gx)} and not more sophisticated patterns).
We also have some miscellaneous other results in the paper. It turns out that by using the triangle removal lemma from graph theory, one can obtain a recurrence result that asserts that whenever {A} is a dense subset of a finite group {G} (not necessarily quasirandom), then there are {\gg |G|^2} pairs {(x,g)} such that {x, gx, xg} all lie in {A}. Using a hypergraph generalisation of the triangle removal lemma known as the hypergraph removal lemma, one can obtain more complicated versions of this statement; for instance, if {A} is a dense subset of {G^2}, then one can find {\gg |G|^3} triples {(x,y,g)} such that {(x,y), (gx, y), (gx, gy), (gxg^{-1}, gyg^{-1})} all lie in {A}. But the method is tailored to the specific types of patterns given here, and we do not have a general method for obtaining recurrence or mixing properties for arbitrary patterns of words in some finite alphabet such as {g,x,y}.
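For a quick brute-force illustration of the first of these statements (my own sketch, with an arbitrary random dense set; note {S_4} is certainly not quasirandom), one can simply count such pairs in the symmetric group {S_4}:

import itertools, random

# The symmetric group S_4 as permutation tuples: p[i] is the image of i.
G = list(itertools.permutations(range(4)))       # |G| = 24

def mul(g, h):
    return tuple(g[h[i]] for i in range(4))      # (g h)(i) = g(h(i))

random.seed(2)
A = set(random.sample(G, len(G) // 2))           # a dense subset

# Count the pairs (x, g) with x, gx, xg all lying in A.
count = sum(1 for g in G for x in G
            if x in A and mul(g, x) in A and mul(x, g) in A)
print(count, "pairs out of", len(G) ** 2)        # a positive fraction of |G|^2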
We also give some properties of a model example of an ultra quasirandom group, namely the ultraproduct {SL_2(F)} of {SL_2(F_{p_n})} where {p_n} is a sequence of primes going off to infinity. Thanks to the substantial recent progress (by Helfgott, Bourgain, Gamburd, Breuillard, and others) on understanding the expansion properties of the finite groups {SL_2(F_{p_n})}, we have a fair amount of knowledge on the ultraproduct {SL_2(F)} as well; for instance any two elements of {SL_2(F)} will almost surely generate a group with a spectral gap. We don’t have any direct application of this particular ultra quasirandom group, but it might be interesting to study it further.
Given a function {f: X \rightarrow Y} between two sets {X, Y}, we can form the graph
\displaystyle  \Sigma := \{ (x,f(x)): x\in X \},
which is a subset of the Cartesian product {X \times Y}.
There are a number of “closed graph theorems” in mathematics which relate the regularity properties of the function {f} with the closure properties of the graph {\Sigma}, assuming some “completeness” properties of the domain {X} and range {Y}. The most famous of these is the closed graph theorem from functional analysis, which I phrase as follows:
Theorem 1 (Closed graph theorem (functional analysis)) Let {X, Y} be complete normed vector spaces over the reals (i.e. Banach spaces). Then a function {f: X \rightarrow Y} is a continuous linear transformation if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is both linearly closed (i.e. it is a linear subspace of {X \times Y}) and topologically closed (i.e. closed in the product topology of {X \times Y}).
I like to think of this theorem as linking together qualitative and quantitative notions of regularity preservation properties of an operator {f}; see this blog post for further discussion.
The theorem is equivalent to the assertion that any continuous linear bijection {f: X \rightarrow Y} from one Banach space to another is necessarily an isomorphism in the sense that the inverse map is also continuous and linear. Indeed, to see that this claim implies the closed graph theorem, one applies it to the projection from {\Sigma} to {X}, which is a continuous linear bijection; conversely, to deduce this claim from the closed graph theorem, observe that the graph of the inverse {f^{-1}} is the reflection of the graph of {f}. As such, the closed graph theorem is a corollary of the open mapping theorem, which asserts that any continuous linear surjection from one Banach space to another is open. (Conversely, one can deduce the open mapping theorem from the closed graph theorem by quotienting out the kernel of the continuous surjection to get a bijection.)
It turns out that there is a closed graph theorem (or equivalent reformulations of that theorem, such as an assertion that bijective morphisms between sufficiently “complete” objects are necessarily isomorphisms, or as an open mapping theorem) in many other categories in mathematics as well. Here are some easy ones:
Theorem 2 (Closed graph theorem (linear algebra)) Let {X, Y} be vector spaces over a field {k}. Then a function {f: X \rightarrow Y} is a linear transformation if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is linearly closed.
Theorem 3 (Closed graph theorem (group theory)) Let {X, Y} be groups. Then a function {f: X \rightarrow Y} is a group homomorphism if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is closed under the group operations (i.e. it is a subgroup of {X \times Y}).
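Theorem 3 is easy enough to verify by hand, but it can also be checked mechanically; here is a small illustrative sketch of mine for the cyclic groups {{\bf Z}_6} and {{\bf Z}_3} under addition, testing whether the graph of a map is a subgroup of the product:

# A small sketch for cyclic groups Z_n, Z_m under addition: the graph
# of f is a subgroup of Z_n x Z_m exactly when f is a homomorphism.

def is_subgroup(S, n, m):
    # For a finite set, containing the identity and being closed under
    # the group operation already implies closure under inverses.
    if (0, 0) not in S:
        return False
    return all(((a + c) % n, (b + d) % m) in S
               for (a, b) in S for (c, d) in S)

n, m = 6, 3
hom = {x: (2 * x) % m for x in range(n)}       # x -> 2x mod 3: a homomorphism
non_hom = {x: (x * x) % m for x in range(n)}   # x -> x^2 mod 3: not one

for f in (hom, non_hom):
    graph = {(x, f[x]) for x in range(n)}
    print(is_subgroup(graph, n, m))            # True, then False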
Theorem 4 (Closed graph theorem (order theory)) Let {X, Y} be totally ordered sets. Then a function {f: X \rightarrow Y} is monotone increasing if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is totally ordered (using the product order on {X \times Y}).
Remark 1 Similar results to the above three theorems (with similarly easy proofs) hold for other algebraic structures, such as rings (using the usual product of rings), modules, algebras, Lie algebras, groupoids, or even categories (a map between categories is a functor iff its graph is again a category). (ADDED IN VIEW OF COMMENTS: further examples include affine spaces and {G}-sets (sets with an action of a given group {G}).) There are also various approximate versions of this theorem that are useful in arithmetic combinatorics, that relate the property of a map {f} being an “approximate homomorphism” in some sense with its graph being an “approximate group” in some sense. This is particularly useful for this subfield of mathematics because there are currently more theorems about approximate groups than about approximate homomorphisms, so that one can profitably use closed graph theorems to transfer results about the former to results about the latter.
A slightly more sophisticated result in the same vein:
Theorem 5 (Closed graph theorem (point set topology)) Let {X, Y} be compact Hausdorff spaces. Then a function {f: X \rightarrow Y} is continuous if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is topologically closed.
Indeed, the “only if” direction is easy, while for the “if” direction, note that if {\Sigma} is a closed subset of {X \times Y}, then it is compact Hausdorff, and the projection map from {\Sigma} to {X} is then a bijective continuous map between compact Hausdorff spaces, which is then closed, thus open, and hence a homeomorphism, giving the claim.
Note that the compactness hypothesis is necessary: for instance, the function {f: {\bf R} \rightarrow {\bf R}} defined by {f(x) := 1/x} for {x \neq 0} and {f(0) := 0} is a function which has a closed graph (a sequence {(x_n, 1/x_n)} in the graph with {x_n \neq 0} can only converge if {x_n} stays bounded away from zero, as otherwise the second coordinate diverges), but is discontinuous.
A similar result (but relying on a much deeper theorem) is available in algebraic geometry, as I learned after asking this MathOverflow question:
Theorem 6 (Closed graph theorem (algebraic geometry)) Let {X, Y} be normal projective varieties over an algebraically closed field {k}of characteristic zero. Then a function {f: X \rightarrow Y} is a regular map if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is Zariski-closed.
Proof: (Sketch) For the only if direction, note that the map {x \mapsto (x,f(x))} is a regular map from the projective variety {X} to the projective variety {X \times Y} and is thus a projective morphism, hence is proper. In particular, the image {\Sigma} of {X} under this map is Zariski-closed.
Conversely, if {\Sigma} is Zariski-closed, then it is also a projective variety, and the projection {(x,y) \mapsto x} is a projective morphism from {\Sigma} to {X}, which is clearly quasi-finite; by the characteristic zero hypothesis, it is also separable. Applying (Grothendieck’s form of) Zariski’s main theorem, this projection is the composition of an open immersion and a finite map. As projective varieties are complete, the open immersion is an isomorphism, and so the projection from {\Sigma} to {X} is finite. Being injective and separable, the degree of this finite map must be one, and hence {k(\Sigma)} and {k(X)} are isomorphic, hence (by normality of {X}) {k[\Sigma]} is contained in (the image of) {k[X]}, which makes the map from {X} to {\Sigma} regular, which makes {f} regular. \Box
The counterexample of the map {f: k \rightarrow k} given by {f(x) := 1/x} for {x \neq 0} and {f(0) := 0} demonstrates why the projective hypothesis is necessary. The necessity of the normality condition (or more precisely, a weak normality condition) is demonstrated by (the projective version of) the map {(t^2,t^3) \mapsto t} from the cuspidal curve {\{ (t^2,t^3): t \in k \}} to {k}. (If one restricts attention to smooth varieties, though, normality becomes automatic.) The necessity of characteristic zero is demonstrated by (the projective version of) the inverse of the Frobenius map {x \mapsto x^p} on a field {k} of characteristic {p}.
There are also a number of closed graph theorems for topological groups, of which the following is typical (see Exercise 3 of these previous blog notes):
Theorem 7 (Closed graph theorem (topological group theory)) Let {X, Y} be {\sigma}-compact, locally compact Hausdorff groups. Then a function {f: X \rightarrow Y} is a continuous homomorphism if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is both group-theoretically closed and topologically closed.
The hypotheses of being {\sigma}-compact, locally compact, and Hausdorff can be relaxed somewhat, but I doubt that they can be eliminated entirely (though I do not have a ready counterexample for this).
In several complex variables, it is a classical theorem (see e.g. Lemma 4 of this blog post) that a holomorphic function from a domain in {{\bf C}^n} to {{\bf C}^n} is locally injective if and only if it is a local diffeomorphism (i.e. its derivative is everywhere non-singular). This leads to a closed graph theorem for complex manifolds:
Theorem 8 (Closed graph theorem (complex manifolds)) Let {X, Y} be complex manifolds. Then a function {f: X \rightarrow Y} is holomorphic if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is a complex manifold (using the complex structure inherited from {X \times Y}) of the same dimension as {X}.
Indeed, one applies the previous observation to the projection from {\Sigma} to {X}. The dimension requirement is needed, as can be seen from the example of the map {f: {\bf C} \rightarrow {\bf C}} defined by {f(z) =1/z} for {z \neq 0} and {f(0)=0}.
(ADDED LATER:) There is a real analogue to the above theorem:
Theorem 9 (Closed graph theorem (real manifolds)) Let {X, Y} be real manifolds. Then a function {f: X \rightarrow Y} is continuous if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is a real manifold of the same dimension as {X}.
This theorem can be proven by applying invariance of domain (discussed in this previous post) to the projection of {\Sigma} to {X}, to show that it is open if {\Sigma} has the same dimension as {X}.
Note though that the analogous claim for smooth real manifolds fails: the function {f: {\bf R} \rightarrow {\bf R}} defined by {f(x) := x^{1/3}} has a smooth graph, but is not itself smooth.
(ADDED YET LATER:) Here is an easy closed graph theorem in the symplectic category:
Theorem 10 (Closed graph theorem (symplectic geometry)) Let {X = (X,\omega_X)} and {Y = (Y,\omega_Y)} be smooth symplectic manifolds of the same dimension. Then a smooth map {f: X \rightarrow Y} is a symplectic morphism (i.e. {f^* \omega_Y = \omega_X}) if and only if the graph {\Sigma := \{(x,f(x)): x \in X \}} is a Lagrangian submanifold of {X \times Y} with the symplectic form {\omega_X \oplus -\omega_Y}.
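To see the equivalence, one can parametrise {\Sigma} by the embedding {\iota: x \mapsto (x,f(x))} and compute the pullback

\displaystyle  \iota^* (\omega_X \oplus -\omega_Y) = \omega_X - f^* \omega_Y,

which vanishes exactly when {f^* \omega_Y = \omega_X}; since {\dim \Sigma = \dim X = \frac{1}{2} \dim(X \times Y)}, the graph {\Sigma} is Lagrangian precisely when {f} is a symplectic morphism.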
In view of the symplectic rigidity phenomenon, it is likely that the smoothness hypotheses on {f,X,Y} can be relaxed substantially, but I will not try to formulate such a result here.
There are presumably many further examples of closed graph theorems (or closely related theorems, such as criteria for inverting a morphism, or open mapping type theorems) throughout mathematics; I would be interested to know of further examples.
I recently finished the first draft of the last of my books based on my 2011 blog posts (and also my Google buzzes and Google+ posts from that year), entitled “Spending symmetry“. The PDF of this draft is available here. This is again a rather assorted (and lightly edited) collection of posts (and buzzes, and Google+ posts), though concentrating in the areas of analysis (both standard and nonstandard), logic, and geometry. As always, comments and corrections are welcome.
[Once again, some advertising on behalf of my department, following on a similar announcement in the previous three years.]
Two years ago, the UCLA mathematics department launched a scholarship opportunity for entering freshman students with exceptional background and promise in mathematics. We have offered one scholarship every year, but this year, due to an additional source of funding, we will also be able to offer an additional scholarship for California residents. The UCLA Math Undergraduate Merit Scholarship provides for full tuition, and a room and board allowance for 4 years. In addition, scholarship recipients follow an individualized accelerated program of study, as determined after consultation with UCLA faculty. The program of study leads to a Masters degree in Mathematics in four years.

More information and an application form for the scholarship can be found on the web at: