Terence Tao has made some deep observations on why the regularity of the three dimensional Navier-Stokes equations is such a hard problem. Although he has gone on to many other equally interesting topics, I remain fascinated by his main point there: that Navier-Stokes is supercritical. The nonlinearities become stronger at small distance scales, making it impossible to know (using present techniques) whether solutions remain smooth for all time. Thus, it is crucial to understand the scale dependence of nonlinearities in fluid mechanics.
The difficult problems of the field I do understand, Quantum Field Theory, also arise when the strength of the nonlinearities increases at short distances. Many ideas from quantum field theory have already been applied to fluid mechanics, most notably by Polyakov.
The best behaved quantum field theories are `asymptotically free': the interactions (nonlinearities) decrease at short distances, so that the theory tends to a free (linear) one. Renormalization theory, which is the systematic study of such short distance limits, is one of the deepest ideas ever to appear in physics.
Just to illustrate the importance that the physics community attaches to renormalization theory, about ten Nobel Prizes have gone to the people who developed these ideas: Tomonaga, Schwinger and Feynman for the early development of Quantum Electrodynamics; Anderson and Wilson for the next stage, which led to a revolution in the theory of critical phenomena; 't Hooft and Veltman for renormalizing Yang-Mills theories; Gross, Politzer and Wilczek for the discovery of asymptotic freedom of Yang-Mills theories. There are several more for work in related areas.
Experience from quantum field theory suggests that we must first replace the Navier-Stokes equations with a `regularized' version, in which there is a short distance cutoff. These equations will no longer be PDEs, but some kind of integro-differential equations. It should be possible to understand the regularity of this cutoff version with existing methods. As pointed out by Tao, the problem of establishing regularity is then shifted to studying the limit as this cutoff goes to zero. This is the step that is analogous to renormalization in quantum field theory. The ideas of quantum field theory suggest that a cutoff that preserves rotation invariance is needed to have a sensible `renormalization' theory. Such a cutoff does not exist at the moment, and finding one could be one of the (many) difficulties that we have to overcome.
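To get a feeling for what a rotation-invariant suppression of short distances looks like in practice, here is a minimal numerical sketch: a field is damped in Fourier space by a factor that depends only on |k|, so no direction is singled out. (The grid size, cutoff scale and Gaussian filter shape are arbitrary choices for illustration; a mere filter is of course not the full regularized dynamics the problem calls for.)

```python
import numpy as np

# A smooth, rotation-invariant damping of wavenumbers above a scale Lam.
# The filter exp(-|k|^2/Lam^2) depends only on |k|, unlike a lattice
# cutoff, whose Brillouin zone is a cube.
n, L, Lam = 64, 2 * np.pi, 8.0          # grid points, box size, cutoff scale
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2                       # |k|^2 on the grid

rng = np.random.default_rng(0)
u = rng.standard_normal((n, n))          # a rough field (e.g. one velocity component)
u_hat = np.fft.fft2(u)
u_cut = np.fft.ifft2(u_hat * np.exp(-k2 / Lam**2)).real  # short scales suppressed
```

Since the filter never exceeds 1, the filtered field has strictly less energy than the original, with the loss concentrated at wavenumbers above Lam.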
To be more specific, consider the Fourier transform of a function f on Euclidean space \mathbb{R}^n: \hat{f}(k) = \int f(x)\, e^{-i k \cdot x}\, d^n x.
The wavenumber k also takes values in a Euclidean space, which is naturally thought of as the dual of the original space. If f is smooth, its Fourier transform will decay faster than any power of |k| in the dual space.
If we consider instead functions on a lattice (a\mathbb{Z})^n with spacing a, the Fourier transform is a function on the torus [-\pi/a, \pi/a]^n. In the language of solid state physics, the fundamental domain of this torus is the `Brillouin zone' of momenta of an electron in a periodic potential. Because there is a smallest possible length (the distance a between nearest neighbors in the lattice) there is a largest possible momentum, \pi/a (half the diameter of the Brillouin zone).
Thus there is a reciprocal relation between the smallest distance allowed in space and the largest allowed wavenumber. It is analogous to the uncertainty principle of quantum mechanics. Indeed, it is the uncertainty principle, once it is accepted that particles are represented by waves.
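This reciprocal relation is easy to see numerically. In the sketch below (the lattice spacing and size are arbitrary choices), the discrete Fourier transform of a 1-d lattice of spacing a supports wavenumbers only up to \pi/a:

```python
import numpy as np

# A 1-d lattice with spacing a: the smallest length in the problem.
a, n = 0.1, 64
k = 2 * np.pi * np.fft.fftfreq(n, d=a)   # wavenumbers the lattice supports

# The largest wavenumber magnitude is pi/a; halving the lattice
# spacing doubles the range of available wavenumbers.
print(np.abs(k).max(), np.pi / a)
```

Refining the lattice (smaller a) enlarges the Brillouin zone; removing the cutoff entirely corresponds to a → 0, when all wavenumbers become available.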
When studying a partial differential equation it is often useful to impose such a smallest possible length scale, at the cost of introducing some non-locality in the problem. This happens if we replace space by a lattice to discretize the PDE to solve it numerically. As noted above, this imposes a limit on the magnitude of the largest possible wavenumber. Following the jargon of QFT, let us call this cutoff procedure `regularization'.
Unfortunately the lattice is not always the best regularization, as it breaks rotation invariance. Also, replacing space by a lattice of points sacrifices the smoothness of functions. Consequently, numerical methods of solving PDEs can suffer from spurious instabilities.
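The broken rotation invariance can be seen directly in the Fourier symbol of the standard discrete Laplacian: two wavevectors of the same length, one along a lattice axis and one along a diagonal, are treated differently. (A small illustrative computation; the spacing and wavenumber values are arbitrary.)

```python
import numpy as np

def lattice_symbol(kx, ky, a):
    """Fourier symbol of the standard 5-point lattice Laplacian."""
    return (2 - 2 * np.cos(kx * a)) / a**2 + (2 - 2 * np.cos(ky * a)) / a**2

a = 0.5            # lattice spacing
kmag = 2.0         # the same |k| probed in two directions
axis = lattice_symbol(kmag, 0.0, a)
diag = lattice_symbol(kmag / np.sqrt(2), kmag / np.sqrt(2), a)

# The continuum Laplacian would give |k|^2 = 4.0 in both cases;
# on the lattice the two directions disagree at order (ka)^2.
print(axis, diag)
```

The discrepancy vanishes as a → 0, but at any finite spacing the lattice singles out its own axes.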
Is it possible to introduce a smallest possible length in space without breaking rotation invariance, and while maintaining smoothness of functions in space? Some proposals of this kind have appeared in theories of quantum gravity (where the symmetry of interest is Lorentz invariance rather than rotation invariance). The price we pay for this is a kind of fuzziness in space, where its co-ordinates become non-commutative. These techniques could be useful in the theory of PDEs and QFT; and also of practical use in solving PDEs numerically.
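A standard toy example of such non-commutative co-ordinates is the fuzzy sphere, where the co-ordinates are rescaled angular momentum matrices; I sketch it below just to show the flavor of the idea (the helper function, truncation level j and radius R are my own illustrative choices, not the construction of any particular paper). Rotation invariance survives exactly, yet only finitely many modes exist, and the co-ordinates commute again as the matrices grow.

```python
import numpy as np

def spin_matrices(j):
    """Angular momentum matrices Jx, Jy, Jz for spin j (illustrative helper)."""
    dim = int(2 * j + 1)
    m = j - np.arange(dim)               # magnetic quantum numbers j, ..., -j
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for i in range(1, dim):              # <m+1|J+|m> = sqrt(j(j+1) - m(m+1))
        Jp[i - 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

j, R = 5, 1.0                            # truncation level and sphere radius
Jx, Jy, Jz = spin_matrices(j)
X, Y, Z = (R / np.sqrt(j * (j + 1)) * J for J in (Jx, Jy, Jz))

# X^2 + Y^2 + Z^2 = R^2 exactly, so the sphere (and rotation
# invariance) survive; but the co-ordinates no longer commute:
# [X, Y] = i R/sqrt(j(j+1)) * Z, a fuzziness that vanishes as j grows.
print(np.linalg.norm(X @ Y - Y @ X))
```

Only (2j+1)^2 independent functions live on this fuzzy sphere, so a smallest length of order R/j has been imposed without picking a preferred direction.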
There is some earlier work on two dimensional fluid mechanics (Zeitlin, Abarbanel and others) where this has been accomplished. But this is just a `toy model'. Although there are some situations where it is possible to confine fluid flow to two dimensions, the vast majority of phenomena of interest are in three dimensions. This was reinforced to me recently by Abarbanel. A fundamental phenomenon (the `cascade') is that information flows into large structures in the fluid from small distance scales (causing apparent randomness of the large scale degrees of freedom), while energy flows into small scales (dissipation due to turbulence and viscosity).
While removing the cutoff is a great mathematical challenge, the cutoff theory itself could be of some interest in physics. After all, the equations of fluid mechanics are an approximation valid for an average over a large number of molecules. A `fuzzy' version of fluid mechanics would describe even larger scale motion, which averages over fluid elements. Such a `mesoscopic' theory may be what we need to understand many physical phenomena, such as the stability of large vortices.
Computational Fluid Dynamics is important to many engineering applications, from weather prediction to the design of aircraft. Typically (see the book by Patankar), space is divided into a finite number of cells, and the PDEs are turned into finite difference equations that are solved numerically. If the size of a cell can be made small enough, this gives a good approximation to the real flow. However, the number of cells is limited by the memory of the computer, so if the region of space is large the cells may be too large: in weather prediction, a cell is several kilometers in size. This means not only that you miss phenomena within such cells, but also that predictions are limited in time, since given enough time the small scale motion will affect the large scale motion. In the case of the atmosphere the limit is about ten days, beyond which predictions of the weather become unreliable even with the largest computers.
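As a toy version of such a scheme (a minimal sketch only, not the finite-volume method of Patankar's book; the grid, viscosity and time step are arbitrary choices), here is the 1-d diffusion equation u_t = ν u_xx on a periodic grid of cells, stepped with an explicit 3-point stencil:

```python
import numpy as np

# Explicit finite differences for u_t = nu * u_xx on a periodic grid.
nu, dx, dt = 0.1, 0.1, 0.01              # dt chosen so nu*dt/dx^2 <= 1/2 (stability)
x = np.arange(0.0, 1.0, dx)              # cell positions
u = np.sin(2 * np.pi * x)                # initial condition sampled on the grid

for _ in range(100):
    # periodic 3-point stencil for the second derivative
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + nu * dt * lap                # forward Euler step
```

After 100 steps the sine mode has decayed by roughly exp(-ν k² t); wavelengths shorter than a couple of cells, however, are simply invisible to the grid, which is the limitation described above.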
Thus, a method that imposes a smallest possible length, and a largest possible wavenumber, without breaking symmetries could help us in mathematical, physical and engineering approaches to fluid mechanics.
[Update May 30 2007] I have another couple of posts on this subject.
[Update May 29 2007] In the paper it is proved rigorously that solutions of fuzzy hydrodynamics on a torus do tend to solutions of the Euler equations in the limit as the regularization is removed. Thus the problem that M. raises in his comments does not appear, at least in two dimensions. What happens in three dimensions is of course an open problem.
[Update May 16 2007] I found a nice page on Turbulence, with many references, by Cosma Shalizi.
[Update May 16 2007] I have posted a paper developing these ideas, arxiv:0705.2139.