Helmholtz decomposition

In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus[1][2][3][4][5][6][7] states that any sufficiently smooth, rapidly decaying vector field in three dimensions can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. It is named after Hermann von Helmholtz.

Definition

For a vector field F defined on a domain V ⊆ Rⁿ, a Helmholtz decomposition is a pair of vector fields G and R such that

F(r) = G(r) + R(r),   G(r) = −∇Φ(r),   ∇ ⋅ R(r) = 0.

Here, Φ is a scalar potential, ∇Φ is its gradient, and ∇ ⋅ R is the divergence of the vector field R. The irrotational vector field G is called a gradient field and R is called a solenoidal field or rotation field. This decomposition does not exist for all vector fields and is not unique.[8]
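For illustration, the following Python/SymPy sketch checks the definition for one arbitrarily chosen pair of potentials (Φ = x² + y² and A = (0, 0, xy) are choices made here, not canonical ones): the gradient field is curl-free, the rotation field is divergence-free, and their sum is the decomposed field.

# Symbolic check of the definition: F = G + R with G = -grad(Phi) irrotational
# and R = curl(A) solenoidal, for one arbitrary choice of the potentials.
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def div(V):
    return sum(sp.diff(Vi, v) for Vi, v in zip(V, (x, y, z)))

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

Phi = x**2 + y**2          # scalar potential (arbitrary example)
A = [0, 0, x*y]            # vector potential (arbitrary example)

G = [-g for g in grad(Phi)]                     # gradient field
R = curl(A)                                     # rotation field
F = [sp.simplify(g + r) for g, r in zip(G, R)]

print(F)                                        # the decomposed field F = G + R
print(curl(G))                                  # [0, 0, 0]: G is irrotational
print(sp.simplify(div(R)))                      # 0: R is solenoidal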

History

The Helmholtz decomposition in three dimensions was first described in 1849[9] by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on some hydrodynamic basic equations in 1858,[10][11] which was part of his research on Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines.[11] Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition was relaxed, and the Helmholtz decomposition was extended to higher dimensions.[8][12][13] For Riemannian manifolds, the Helmholtz–Hodge decomposition was derived using differential geometry and tensor calculus.[8][11][14][15]

The decomposition has become an important tool for many problems in theoretical physics,[11][14] but it has also found applications in animation, computer vision, and robotics.[15]

Three-dimensional space

Many physics textbooks restrict the Helmholtz decomposition to three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential A can be defined such that the rotation field is given by R = ∇ × A, using the curl of a vector field.[16]

Let F be a vector field on a bounded domain V ⊆ R³ which is twice continuously differentiable inside V, and let S be the surface that encloses the domain V. Then F can be decomposed into a curl-free component and a divergence-free component as follows:[17]

F = −∇Φ + ∇ × A,

where

Φ(r) = (1/4π) ∫_V (∇′ ⋅ F(r′))/|r − r′| dV′ − (1/4π) ∮_S n̂′ ⋅ F(r′)/|r − r′| dS′,

A(r) = (1/4π) ∫_V (∇′ × F(r′))/|r − r′| dV′ − (1/4π) ∮_S n̂′ × F(r′)/|r − r′| dS′,

and ∇′ is the nabla operator with respect to r′, not r.

If V = R³, and is therefore unbounded, and F vanishes faster than 1/r as r → ∞, then one has[18]

Φ(r) = (1/4π) ∫_{R³} (∇′ ⋅ F(r′))/|r − r′| dV′,

A(r) = (1/4π) ∫_{R³} (∇′ × F(r′))/|r − r′| dV′.

This holds in particular if F is twice continuously differentiable in R³ and of bounded support.
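As a concrete worked example of the unbounded case (a sketch; the field F(r) = r e^(−r²) is chosen here for convenience): F is curl-free, so the vector potential vanishes, and the scalar potential produced by the volume integral is the decaying solution of ∇²Φ = −∇ ⋅ F, namely Φ(r) = ½ e^(−r²). The SymPy check below verifies these relations symbolically.

# Worked example for the unbounded case: F(r) = r*exp(-|r|^2) is curl-free, so the
# vector potential vanishes and the scalar potential is the decaying solution of
# laplace(Phi) = -div(F), namely Phi = exp(-|r|^2)/2.
import sympy as sp

x, y, z = sp.symbols('x y z')
r2 = x**2 + y**2 + z**2

F = [x*sp.exp(-r2), y*sp.exp(-r2), z*sp.exp(-r2)]
Phi = sp.exp(-r2)/2

grad = lambda f: [sp.diff(f, v) for v in (x, y, z)]
div = lambda V: sum(sp.diff(Vi, v) for Vi, v in zip(V, (x, y, z)))
curl = lambda V: [sp.diff(V[2], y) - sp.diff(V[1], z),
                  sp.diff(V[0], z) - sp.diff(V[2], x),
                  sp.diff(V[1], x) - sp.diff(V[0], y)]

print([sp.simplify(c) for c in curl(F)])                    # [0, 0, 0]: no rotation field
print([sp.simplify(f + g) for f, g in zip(F, grad(Phi))])   # [0, 0, 0]: F = -grad(Phi)
print(sp.simplify(div(grad(Phi)) + div(F)))                 # 0: laplace(Phi) = -div(F)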

Derivation

Proof

Suppose we have a vector function F(r) of which we know the curl, ∇ × F, and the divergence, ∇ ⋅ F, in the domain V and the fields on the boundary S. Writing the function using the delta function in the form

δ³(r − r′) = −(1/4π) ∇² (1/|r − r′|),

where ∇² is the Laplace operator, we have

F(r) = ∫_V F(r′) δ³(r − r′) dV′
     = ∫_V F(r′) [ −(1/4π) ∇² (1/|r − r′|) ] dV′
     = −(1/4π) ∇² ∫_V F(r′)/|r − r′| dV′
     = −(1/4π) [ ∇ ( ∇ ⋅ ∫_V F(r′)/|r − r′| dV′ ) − ∇ × ( ∇ × ∫_V F(r′)/|r − r′| dV′ ) ]
     = −(1/4π) ∇ ( ∫_V F(r′) ⋅ ∇(1/|r − r′|) dV′ ) + (1/4π) ∇ × ( ∫_V ∇(1/|r − r′|) × F(r′) dV′ )
     = (1/4π) ∇ ( ∫_V F(r′) ⋅ ∇′(1/|r − r′|) dV′ ) − (1/4π) ∇ × ( ∫_V ∇′(1/|r − r′|) × F(r′) dV′ ),

where we have used the definition of the vector Laplacian,

∇²a = ∇(∇ ⋅ a) − ∇ × (∇ × a),

the interchange of differentiation with respect to r with the integration over r′, and, in the last line, ∇(1/|r − r′|) = −∇′(1/|r − r′|) together with linearity of the function arguments.

Then using the vectorial identities

F(r′) ⋅ ∇′(1/|r − r′|) = ∇′ ⋅ ( F(r′)/|r − r′| ) − (∇′ ⋅ F(r′))/|r − r′|,
∇′(1/|r − r′|) × F(r′) = ∇′ × ( F(r′)/|r − r′| ) − (∇′ × F(r′))/|r − r′|,

we get

F(r) = −(1/4π) ∇ ( ∫_V (∇′ ⋅ F(r′))/|r − r′| dV′ − ∫_V ∇′ ⋅ ( F(r′)/|r − r′| ) dV′ )
       + (1/4π) ∇ × ( ∫_V (∇′ × F(r′))/|r − r′| dV′ − ∫_V ∇′ × ( F(r′)/|r − r′| ) dV′ ).

Thanks to the divergence theorem the equation can be rewritten as

F(r) = −(1/4π) ∇ ( ∫_V (∇′ ⋅ F(r′))/|r − r′| dV′ − ∮_S n̂′ ⋅ F(r′)/|r − r′| dS′ )
       + (1/4π) ∇ × ( ∫_V (∇′ × F(r′))/|r − r′| dV′ − ∮_S n̂′ × F(r′)/|r − r′| dS′ )

with outward surface normal n̂′.

Defining

Φ(r) = (1/4π) ∫_V (∇′ ⋅ F(r′))/|r − r′| dV′ − (1/4π) ∮_S n̂′ ⋅ F(r′)/|r − r′| dS′,
A(r) = (1/4π) ∫_V (∇′ × F(r′))/|r − r′| dV′ − (1/4π) ∮_S n̂′ × F(r′)/|r − r′| dS′,

we finally obtain

F(r) = −∇Φ(r) + ∇ × A(r).

Solution space

If (Φ₁, A₁) is a Helmholtz decomposition of F, in the sense that F = −∇Φ₁ + ∇ × A₁, then (Φ₂, A₂) is another decomposition if, and only if,

Φ₂ = Φ₁ + λ

and

A₂ = A₁ + B + ∇ψ,
where
  • λ is a harmonic scalar field,
  • B is a vector field which fulfills ∇ × B = ∇λ,
  • ψ is a scalar field.

Proof: Set λ = Φ₂ − Φ₁ and B = A₂ − A₁. According to the definition of the Helmholtz decomposition, the condition is equivalent to

−∇λ + ∇ × B = 0.

Taking the divergence of each member of this equation yields ∇²λ = 0, hence λ is harmonic.

Conversely, given any harmonic function λ, ∇λ is solenoidal since

∇ ⋅ (∇λ) = ∇²λ = 0.

Thus, according to the above section, there exists a vector field B such that ∇λ = ∇ × B.

If B′ is another such vector field, then B − B′ fulfills ∇ × (B − B′) = 0, hence B − B′ = ∇ψ for some scalar field ψ.
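The following SymPy sketch illustrates this non-uniqueness with arbitrary choices (Φ₁ = x² + y², A₁ = (0, 0, xy), and the harmonic function λ = x² − y² are picked here only for illustration): shifting the scalar potential by λ moves the gradient part by −∇λ and the rotation part by +∇λ, leaving the sum F unchanged and the rotation part solenoidal.

# Non-uniqueness: adding a harmonic function lam to the scalar potential shifts
# the gradient part by -grad(lam) and the rotation part by +grad(lam),
# leaving the sum F unchanged and the rotation part divergence-free.
import sympy as sp

x, y, z = sp.symbols('x y z')
grad = lambda f: [sp.diff(f, v) for v in (x, y, z)]
div = lambda V: sum(sp.diff(Vi, v) for Vi, v in zip(V, (x, y, z)))
curl = lambda V: [sp.diff(V[2], y) - sp.diff(V[1], z),
                  sp.diff(V[0], z) - sp.diff(V[2], x),
                  sp.diff(V[1], x) - sp.diff(V[0], y)]

G1 = [-g for g in grad(x**2 + y**2)]          # gradient part of one decomposition
R1 = curl([0, 0, x*y])                        # rotation part (divergence-free)
F = [g + r for g, r in zip(G1, R1)]

lam = x**2 - y**2                             # harmonic: laplace(lam) = 0
G2 = [g - d for g, d in zip(G1, grad(lam))]
R2 = [r + d for r, d in zip(R1, grad(lam))]

print(sp.simplify(div(grad(lam))))                             # 0: lam is harmonic
print([sp.simplify(g + r - f) for g, r, f in zip(G2, R2, F)])  # [0, 0, 0]: same field F
print(sp.simplify(div(R2)))                                    # 0: R2 is still solenoidal
print([sp.simplify(c) for c in curl(G2)])                      # [0, 0, 0]: G2 is still a gradient field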

Fields with prescribed divergence and curl

The term "Helmholtz theorem" can also refer to the following. Let C be a solenoidal vector field and d a scalar field on R³ which are sufficiently smooth and which vanish faster than 1/r² at infinity. Then there exists a vector field F such that

∇ ⋅ F = d  and  ∇ × F = C;

if additionally the vector field F vanishes as r → ∞, then F is unique.[18]

In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type.[18] The proof is by a construction generalizing the one given above: we set

F = −∇(𝒢(d)) + ∇ × (𝒢(C)),

where 𝒢 represents the Newtonian potential operator. (When acting on a vector field, such as ∇ × F, it is defined to act on each component.)
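This construction can also be checked numerically. The sketch below uses a periodic box instead of all of R³ (an assumption made here so that spectral derivatives can be used and the Newtonian potential becomes division by ‖k‖² in Fourier space; the prescribed divergence must then have zero mean). It assembles F from a prescribed divergence d and a prescribed solenoidal curl C and verifies both properties.

# Numerical sketch on a periodic box: build F with prescribed divergence d and
# prescribed solenoidal curl C via F_hat = (-i*d_hat*k + i*k x C_hat)/|k|^2,
# then verify div F = d and curl F = C using spectral derivatives.
import numpy as np

n, L = 32, 2*np.pi
x = np.arange(n) * L / n
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                      # avoid 0/0; the k = 0 (mean) mode is handled separately

fft = np.fft.fftn
ifft = lambda a: np.real(np.fft.ifftn(a))

def curl(V):
    vh = [fft(c) for c in V]
    return [ifft(1j*(KY*vh[2] - KZ*vh[1])),
            ifft(1j*(KZ*vh[0] - KX*vh[2])),
            ifft(1j*(KX*vh[1] - KY*vh[0]))]

def div(V):
    return ifft(1j*(KX*fft(V[0]) + KY*fft(V[1]) + KZ*fft(V[2])))

# Prescribed data: a zero-mean divergence and a solenoidal curl (built as a curl).
d = np.sin(X) * np.cos(Y)
C = curl([np.sin(Y)*np.cos(Z), np.sin(Z)*np.cos(X), np.cos(X)*np.sin(Y)])

dh, Ch = fft(d), [fft(c) for c in C]
Fh = [(-1j*dh*KX + 1j*(KY*Ch[2] - KZ*Ch[1])) / K2,
      (-1j*dh*KY + 1j*(KZ*Ch[0] - KX*Ch[2])) / K2,
      (-1j*dh*KZ + 1j*(KX*Ch[1] - KY*Ch[0])) / K2]
for c in Fh:
    c[0, 0, 0] = 0.0                   # the mean of F is not determined; set it to zero
F = [ifft(c) for c in Fh]

print(np.max(np.abs(div(F) - d)))                              # ~ machine precision
print(max(np.max(np.abs(a - b)) for a, b in zip(curl(F), C)))  # ~ machine precision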

Weak formulation

The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose Ω is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field u ∈ (L²(Ω))³ has an orthogonal decomposition:[19][20][21]

u = ∇φ + ∇ × A,

where φ is in the Sobolev space H¹(Ω) of square-integrable functions on Ω whose partial derivatives defined in the distribution sense are square integrable, and A ∈ H(curl, Ω), the Sobolev space of vector fields consisting of square-integrable vector fields with square-integrable curl.

For a slightly smoother vector field u ∈ H(curl, Ω), a similar decomposition holds:

u = ∇φ + v,

where φ ∈ H¹(Ω) and v ∈ (H¹(Ω))ᵈ.

Derivation from the Fourier transform

Note that in the theorem stated here, we have imposed the condition that if F is not defined on a bounded domain, then F shall decay faster than 1/r. Thus, the Fourier transform of F, denoted as G, is guaranteed to exist. We apply the convention

F(r) = ∭ G(k) e^(ik⋅r) dVₖ.

The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of the same dimension.

Now consider the following scalar and vector fields:

G_Φ(k) = i (k ⋅ G(k)) / ‖k‖²,
G_A(k) = i (k × G(k)) / ‖k‖²,

Φ(r) = ∭ G_Φ(k) e^(ik⋅r) dVₖ,
A(r) = ∭ G_A(k) e^(ik⋅r) dVₖ.

Hence

G(k) = −i k G_Φ(k) + i k × G_A(k),

F(r) = −∇Φ(r) + ∇ × A(r).

Longitudinal and transverse fields

A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component.[22] This terminology comes from the following construction: Compute the three-dimensional Fourier transform F̂ of the vector field F. Then decompose this field, at each point k, into two components, one of which points longitudinally, i.e. parallel to k, the other of which points in the transverse direction, i.e. perpendicular to k. So far, we have

F̂(k) = F̂_t(k) + F̂_l(k),
k ⋅ F̂_t(k) = 0,
k × F̂_l(k) = 0.

Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:

F(r) = F_t(r) + F_l(r),
∇ × F_l(r) = 0,
∇ ⋅ F_t(r) = 0.

Since ∇ × F_l = 0 and ∇ ⋅ F_t = 0, we can get

F_l = −∇Φ  and  F_t = ∇ × A,

so this is indeed the Helmholtz decomposition.[23]
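The same construction can be carried out numerically, with the discrete Fourier transform standing in for the integral transform (a sketch on a periodic box, an assumption made here for convenience): the longitudinal part is the projection of F̂(k) onto k, and the transverse part is the remainder.

# Numerical sketch of the longitudinal/transverse split on a periodic box:
# F_long_hat = k (k . F_hat)/|k|^2 is parallel to k; F_trans_hat is the remainder.
import numpy as np

n, L = 32, 2*np.pi
x = np.arange(n) * L / n
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                       # avoid 0/0 at the k = 0 (mean) mode

fft = np.fft.fftn
ifft = lambda a: np.real(np.fft.ifftn(a))

# An arbitrary smooth periodic test field with both divergence and curl.
F = [np.sin(X)*np.cos(Y) + np.cos(Z),
     np.sin(Y)*np.cos(Z) + np.cos(X),
     np.sin(Z)*np.cos(X) + np.cos(Y)]

Fh = [fft(c) for c in F]
kdotF = KX*Fh[0] + KY*Fh[1] + KZ*Fh[2]
Fl = [ifft(KX*kdotF/K2), ifft(KY*kdotF/K2), ifft(KZ*kdotF/K2)]   # longitudinal part
Ft = [f - l for f, l in zip(F, Fl)]                              # transverse part

def div(V):
    return ifft(1j*(KX*fft(V[0]) + KY*fft(V[1]) + KZ*fft(V[2])))

def curl(V):
    vh = [fft(c) for c in V]
    return [ifft(1j*(KY*vh[2] - KZ*vh[1])),
            ifft(1j*(KZ*vh[0] - KX*vh[2])),
            ifft(1j*(KX*vh[1] - KY*vh[0]))]

print(max(np.max(np.abs(c)) for c in curl(Fl)))   # ~ machine precision: curl-free
print(np.max(np.abs(div(Ft))))                    # ~ machine precision: divergence-free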

Generalization to higher dimensions

Matrix approach

The generalization to d dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.

Let F be a vector field on a domain V ⊆ Rᵈ which decays faster than |r|^(−δ) for |r| → ∞ and δ > 2.

The scalar potential is defined similarly to the three-dimensional case as

Φ(r) = ∫_V K(r, r′) ∇′ ⋅ F(r′) dV′,

where the integration kernel K(r, r′) is again the fundamental solution of Laplace's equation, but in d-dimensional space:

K(r, r′) = 1 / ( d (d − 2) V_d |r − r′|^(d−2) ),   V_d = π^(d/2) / Γ(d/2 + 1),

with V_d the volume of the d-dimensional unit ball and Γ the gamma function.

For d = 3, d(d − 2)V_d is just equal to 4π, yielding the same prefactor as above. The rotational potential is an antisymmetric matrix with the elements

A_ij(r) = ∫_V K(r, r′) ( ∂′_i F_j(r′) − ∂′_j F_i(r′) ) dV′.

Above the diagonal are d(d − 1)/2 entries, which occur again mirrored at the diagonal, but with a negative sign. In the three-dimensional case, the matrix elements just correspond to the components of the vector potential A = (A_1, A_2, A_3) = (A_23, A_31, A_12). However, such a matrix potential can be written as a vector only in the three-dimensional case, because d(d − 1)/2 = d is valid only for d = 3.

As in the three-dimensional case, the gradient field is defined as

G(r) = −∇Φ(r).

The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:

R_i(r) = Σ_j ∂_j A_ij(r).

In three-dimensional space, this is equivalent to the rotation of the vector potential.[8][24]
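A small SymPy sketch of this structure in d = 4 (the scalar potential and the antisymmetric matrix entries below are arbitrary smooth choices, not computed from the integral formulas above): the row divergence of an antisymmetric matrix potential is automatically divergence-free, so the sum of the gradient field and the rotational field is a valid Helmholtz decomposition.

# d = 4 sketch: the row divergence R_i = sum_j dA_ij/dx_j of any antisymmetric
# matrix potential A is automatically solenoidal, because
# div R = sum_ij d^2 A_ij/(dx_i dx_j) vanishes by antisymmetry.
import sympy as sp

xs = sp.symbols('x1 x2 x3 x4')
d = len(xs)

Phi = sum(v**2 for v in xs)                    # example scalar potential
A = [[sp.Integer(0)]*d for _ in range(d)]      # antisymmetric matrix potential (example entries)
A[0][1], A[1][2], A[2][3], A[0][3] = xs[0]*xs[1], xs[1]*xs[2], xs[2]*xs[3], xs[3]*xs[0]
for i in range(d):
    for j in range(i):
        A[i][j] = -A[j][i]                     # entries below the diagonal: mirrored, negative sign

G = [-sp.diff(Phi, v) for v in xs]                                      # gradient field -grad(Phi)
R = [sum(sp.diff(A[i][j], xs[j]) for j in range(d)) for i in range(d)]  # row divergence
F = [sp.simplify(g + r) for g, r in zip(G, R)]                          # the decomposed field

print(R)                                                                # e.g. [2*x1, 0, 0, -2*x4]
print(sp.simplify(sum(sp.diff(R[i], xs[i]) for i in range(d))))         # 0: R is divergence-free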

Tensor approach

In a d-dimensional vector space with d ≠ 3, K(r, r′) can be replaced by the appropriate Green's function for the Laplacian, defined by

∂_μ ∂_μ K(r, r′) = δ^d(r − r′),

where the Einstein summation convention is used for the index μ. For example, K(r, r′) = (1/2π) ln|r − r′| in 2D.

Following the same steps as above, we can write

F_μ(r) = ∫_V F_ν(r′) δ_μν ∂_ρ ∂_ρ K(r, r′) dV′,

where δ_μν is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol ε,

ε_μρα ε_νσα = (d − 2)! (δ_μν δ_ρσ − δ_μσ δ_ρν),

which is valid in d dimensions, where α is a (d − 2)-component multi-index. This gives

F_μ(r) = ∂_μ ∫_V F_ν(r′) ∂_ν K(r, r′) dV′ + (1/(d − 2)!) ε_μρα ∂_ρ ∫_V ε_νσα F_ν(r′) ∂_σ K(r, r′) dV′.

We can therefore write

F_μ(r) = −∂_μ Φ(r) + ε_μρα ∂_ρ A_α(r),

where

Φ(r) = −∫_V F_ν(r′) ∂_ν K(r, r′) dV′,
A_α(r) = (1/(d − 2)!) ∫_V ε_νσα F_ν(r′) ∂_σ K(r, r′) dV′.

Note that the vector potential is replaced by a rank-(d − 2) tensor in d dimensions.

Because K(r, r′) is a function of only r − r′, one can replace ∂_μ K(r, r′) = −∂′_μ K(r, r′), giving

Φ(r) = ∫_V F_ν(r′) ∂′_ν K(r, r′) dV′,
A_α(r) = −(1/(d − 2)!) ∫_V ε_νσα F_ν(r′) ∂′_σ K(r, r′) dV′.

Integration by parts can then be used to give

Φ(r) = −∫_V K(r, r′) ∂′_ν F_ν(r′) dV′ + ∮_{∂V} K(r, r′) F_ν(r′) n̂′_ν dS′,
A_α(r) = (1/(d − 2)!) ( ∫_V K(r, r′) ε_νσα ∂′_σ F_ν(r′) dV′ − ∮_{∂V} K(r, r′) ε_νσα F_ν(r′) n̂′_σ dS′ ),

where ∂V is the boundary of V. These expressions are analogous to those given above for three-dimensional space.

For a further generalization to manifolds, see the discussion of Hodge decomposition below.

Differential forms

The Hodge decomposition is closely related to the Helmholtz decomposition,[25] generalizing from vector fields on R3 to differential forms on a Riemannian manifold M. Most formulations of the Hodge decomposition require M to be compact.[26] Since this is not true of R3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem.

Extensions to fields not decaying at infinity

Most textbooks only deal with vector fields decaying faster than |r|^(−δ) with δ > 2 at infinity.[16][13][27] However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than |r|^(−δ) with δ > 0, which is substantially less strict. To achieve this, the kernel K(r, r′) in the convolution integrals has to be replaced by K′(r, r′) = K(r, r′) − K(0, r′).[28] With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomially.[12][13][24][29]

For all analytic vector fields that need not go to zero even at infinity, methods based on partial integration and the Cauchy formula for repeated integration[30] can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions.[8]

Uniqueness of the solution

In general, the Helmholtz decomposition is not uniquely defined. A harmonic function H(r) is a function that satisfies ΔH(r) = 0. By adding H(r) to the scalar potential Φ(r), a different Helmholtz decomposition can be obtained:

G′(r) = G(r) − ∇H(r) = −∇(Φ(r) + H(r)),
R′(r) = R(r) + ∇H(r).

For vector fields F decaying at infinity, it is a plausible choice that the scalar and rotation potentials also decay at infinity. Because H(r) = 0 is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.[31]

This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.

Applications

Electrodynamics

The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in terms of potentials and solve them more easily. The Helmholtz decomposition can be used to prove that, given electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials.[16]

Fluid dynamics

In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier–Stokes equations. If the Helmholtz projection P is applied to the linearized incompressible Navier–Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. However, both equations, the Stokes and the linearized Navier–Stokes equations, are equivalent. The operator PΔ is called the Stokes operator.[32]

Dynamical systems theory

In the theory of dynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases.[33][34][35]

For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963[36]), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained. The Lorenz vector field is

F(x, y, z) = (σ(y − x), x(ρ − z) − y, xy − βz),

and its Helmholtz decomposition, with the scalar potential Φ(x, y, z) = (σx² + y² + βz²)/2, is given as:

G(x, y, z) = −∇Φ = (−σx, −y, −βz),
R(x, y, z) = F − G = (σy, x(ρ − z), xy).

The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fixed point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.[8][37]
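The decomposition of the Lorenz vector field can be verified symbolically (a short SymPy sketch using the quadratic potential stated above):

# Helmholtz decomposition of the Lorenz vector field with the quadratic scalar
# potential Phi = (sigma*x^2 + y^2 + beta*z^2)/2: the remainder R = F + grad(Phi)
# is divergence-free.
import sympy as sp

x, y, z = sp.symbols('x y z')
sigma, rho, beta = sp.symbols('sigma rho beta', positive=True)

F = [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]     # Lorenz system
Phi = (sigma*x**2 + y**2 + beta*z**2) / 2

G = [-sp.diff(Phi, v) for v in (x, y, z)]              # gradient field -grad(Phi)
R = [sp.simplify(f - g) for f, g in zip(F, G)]         # rotation field R = F - G

print(R)                                               # [sigma*y, x*(rho - z), x*y] (up to rearrangement)
print(sp.simplify(sum(sp.diff(R[i], v) for i, v in enumerate((x, y, z)))))  # 0: divergence-free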

Medical imaging

In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement field into its shear component (divergence-free) and its compression component (curl-free).[38] In this way, the complex shear modulus can be calculated without contributions from compression waves.

Computer animation and robotics

The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics and image reconstruction, but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.[15][39]

See also

Notes

References

  • George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, 4th edition, Academic Press: San Diego (1995) pp. 92–93
  • George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists – International Edition, 6th edition, Academic Press: San Diego (2005) pp. 95–101
  • Rutherford Aris, Vectors, tensors, and the basic equations of fluid mechanics, Prentice-Hall (1962), OCLC 299650765, pp. 70–72