# Journal of Nonlinear Mathematical Physics

Volume 26, Issue 2, March 2019, Pages 214 - 227

# Delta shock waves in conservation laws with impulsive moving source: some results obtained by multiplying distributions

Authors
C.O.R. Sarrico
CMAFCIO, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal, corsarrico@gmail.com
Received 17 September 2018, Accepted 20 November 2018, Available Online 6 January 2021.
DOI
10.1080/14029251.2019.1591718
Keywords
Products of distributions; conservation laws with impulsive source; Riemann problems; traveling shock waves; Delta waves; Delta shock waves
Abstract

The present paper concerns the study of a Riemann problem for the conservation law $u_t + [\phi(u)]_x = k\,\delta(x-vt)$, where x, t, k, v and u = u(x, t) are real numbers. We consider ϕ an entire function taking real values on the real axis, and δ stands for the Dirac measure. Within a convenient space of distributions we will explicitly see the possible emergence of waves with the shape of shock waves, delta waves and delta shock waves. For this purpose, we define a rigorous concept of a solution which extends both the classical solution concept and a weak solution concept. All this framework is developed in the setting of a distributional product that is not constructed by approximation. We include the main ideas of this product for the reader's convenience. Recall that delta shock waves are relevant physical phenomena which may be interpreted as processes of concentration of mass or even as processes of formation of galaxies in the universe.

Open Access

## 1. Introduction and contents

Let us consider the Riemann problem

$$u_t + [\phi(u)]_x = k\,\delta(x-vt), \tag{1.1}$$
$$u(x,0) = u_1 + (u_2-u_1)H(x), \tag{1.2}$$
where x ∈ ℝ is the space variable, t ∈ ℝ is the time variable, u(x, t) ∈ ℝ is the unknown state variable, k, v, u1, u2 ∈ ℝ are given constants, H is the Heaviside function and δ stands for the Dirac measure supported at the origin. We suppose ϕ an entire function taking real values on the real axis. The goal is to evaluate the evolution of the profile (1.2) within the space W of distributions u defined by
$$u(x,t) = f(t) + g(t)H(x-vt) + h(t)\delta(x-vt),$$
where f, g, h : ℝ → ℝ are C¹-functions.

The main result (under certain conditions) is the solution

$$u(x,t) = u_1 + (u_2-u_1)H(x-vt) + At\,\delta(x-vt), \tag{1.3}$$
where
$$A = k + v(u_2-u_1) - [\phi(u_2) - \phi(u_1)]. \tag{1.4}$$

This solution, when it exists, is unique in W. Thus, the emergence of traveling shock waves (A = 0 and u1 ≠ u2), delta waves (A ≠ 0 and u1 = u2) and delta shock waves (A ≠ 0 and u1 ≠ u2) becomes possible.
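
A quick numerical illustration (not from the paper): the amplitude A of (1.4) decides which of the three profiles appears in (1.3), with names as in the taxonomy above; existence of the α-solution itself still depends on the speed conditions of Theorem 6.1 below. The flux and data used here are the Burgers-type values of Section 7.

```python
import math

def wave_type(phi, k, v, u1, u2):
    """Amplitude A of (1.4) and the corresponding profile of (1.3)."""
    A = k + v * (u2 - u1) - (phi(u2) - phi(u1))
    if math.isclose(A, 0.0, abs_tol=1e-12):
        return A, "travelling shock wave" if u1 != u2 else "constant state"
    return A, "delta shock wave" if u1 != u2 else "delta wave"

phi = lambda u: u**2 / 2                     # Burgers flux
print(wave_type(phi, 1.0, 1.5, 1.0, 0.0))    # A = 0: shock, no impulse
print(wave_type(phi, 1.0, 0.5, 1.0, 0.0))    # A = 1: delta shock wave
```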

An interesting result can be obtained as a particular case, supposing that there exists a solution for (1.1), (1.2), in W, with k = 0: in this case (as we will see) we have necessarily A = 0. This means that, in W, the usual conservation law

$$u_t + [\phi(u)]_x = 0, \tag{1.5}$$
subjected to the initial condition (1.2), can develop neither delta waves nor delta shock waves; the impulse amplification (which concerns the term At δ(x − vt)) during the time evolution is impossible in this setting. This is a characteristic of the equation (1.5) that does not happen for systems of the same type; remember that Dirac measures amplified along time appear in the solutions of Riemann problems for systems of conservation laws with zero source (see [4-6, 10, 11, 14]). Also recall that this amplification is a relevant physical phenomenon which may be interpreted as a process of concentration of mass or even as a process of formation of galaxies in the universe [16]. Thus, for the problem (1.1), (1.2), the concentration of matter in the front shock during the time evolution is possible only when k ≠ 0. We will also prove that the existence of an impulsive source does not necessarily cause the appearance of an impulse in the solution; that is, there exist cases, with k ≠ 0, where the solution contains only functions (without any Dirac measure present). This will be easily shown in an example concerning the Burgers equation with an impulsive moving source.

Often, in this kind of problem, the distributional solutions appear as weak limits. It may even happen that those weak limits cannot be substituted into the equations or systems, owing to the well-known difficulties of multiplying distributions. On top of that, limit processes involving sequences of continuous functions may not yield mathematically consistent solutions (see [13], Section II). Our method overcomes those difficulties, as we will explain.

For the equation (1.1), we will adopt a concept of solution defined within the setting of a distributional product. This concept is a consistent extension of the classical solution concept and, in a sense explained at the end of Section 5, can also be seen as a new type of weak solution.

In our framework, the product of two distributions is a distribution that depends on the choice of a certain function α encoding the indeterminacy inherent to such products. This indeterminacy generally is not avoidable and in many cases it also has a physical meaning; concerning this point let us mention [13, 8]. Thus, the solutions of differential equations containing such products may depend (or not) on α. We call such solutions α-solutions. When the solutions depend on α, the future behavior of the system cannot be fully predicted. This fact might be due to physical features omitted in the formulation of the model with the goal of simplifying it. It is worthwhile to stress that, for the present problem (1.1), (1.2), the solutions, when they exist, are independent of α!

The concept of α-solution has proven to be a convenient tool in the study of singular solutions of nonlinear PDEs and systems (see, for instance, [8-12, 14, 15]). Recently, Chun Shen and Meina Sun [17] used our framework to study the zero-pressure gas dynamics system with a Coulomb-like friction term; they advocate that "it is more easy to use the method of α-solutions to discover discontinuous solutions involving Dirac-delta measures in many branches of engineering, physics and mechanics".

Let us now summarize the present paper’s contents. In Section 2, we present the main ideas of our method for multiplying distributions. In Section 3, we define powers of certain distributions. In Section 4, we define the composition of an entire function with a distribution. In Section 5, we define the concept of α-solution for the equation (1.1). In Section 6, we present the main result, that is, all possible solutions of the Riemann problem (1.1), (1.2) that belong to W. Easy examples are given in Section 7.

## 2. The multiplication of distributions

Let $C^\infty$ be the space of indefinitely differentiable real or complex-valued functions defined on ℝ^N, N ∈ {1, 2, 3,...}, and 𝒟 the subspace of $C^\infty$ consisting of those functions with compact support. Let 𝒟′ be the space of Schwartz distributions and L(𝒟) the space of continuous linear maps ϕ : 𝒟 → 𝒟, where we suppose 𝒟 endowed with the usual topology. We will sketch the main ideas of our distributional product (the reader can look at (2.4), (2.7), and (2.9) as definitions, if he prefers to skip this presentation). For proofs and other details concerning this product see [7].

## The construction of a general product in 𝒟′

First, we define a product Tϕ ∈ 𝒟′ for T ∈ 𝒟′ and ϕ ∈ L(𝒟) by

$$\langle T\varphi, \xi\rangle = \langle T, \varphi(\xi)\rangle,$$
for all ξ ∈ 𝒟; this makes 𝒟′ a right L(𝒟)-module. Next, we define an epimorphism $\tilde\zeta : L(\mathscr{D}) \to \mathscr{D}'$, where the image of ϕ is the distribution $\tilde\zeta(\varphi)$ given by
$$\langle \tilde\zeta(\varphi), \xi\rangle = \int \varphi(\xi),$$
for all ξ ∈ 𝒟 (in the present paper, all integrals are extended over all of ℝ^N); given S ∈ 𝒟′, we say that ϕ is a representative operator of S if $\tilde\zeta(\varphi) = S$. For instance, if β ∈ $C^\infty$ is seen as a distribution, the operator ϕ_β ∈ L(𝒟) defined by ϕ_β(ξ) = βξ, for all ξ ∈ 𝒟, is a representative operator of β because, for all ξ ∈ 𝒟, we have
$$\langle \tilde\zeta(\varphi_\beta), \xi\rangle = \int \varphi_\beta(\xi) = \int \beta\xi = \langle \beta, \xi\rangle.$$

For this reason $\tilde\zeta(\varphi_\beta) = \beta$. If T ∈ 𝒟′, we also have

$$\langle T\varphi_\beta, \xi\rangle = \langle T, \varphi_\beta(\xi)\rangle = \langle T, \beta\xi\rangle = \langle T\beta, \xi\rangle,$$
for all ξ ∈ 𝒟. Hence,
$$T\beta = T\varphi_\beta.$$

Thus, given T, S ∈ 𝒟′, we are tempted to define a natural product by setting TS := Tϕ, where ϕ ∈ L(𝒟) is a representative operator of S, i.e., ϕ is such that $\tilde\zeta(\varphi) = S$. Unfortunately, this product is not well defined, because Tϕ depends on the representative ϕ ∈ L(𝒟) of S ∈ 𝒟′.

This difficulty can be overcome if we fix α ∈ 𝒟 with ∫α = 1 and associate to each ϕ ∈ L(𝒟) the operator $s_\alpha(\varphi)$ defined by

$$[(s_\alpha\varphi)(\xi)](y) = \int \varphi[(\tau_y\check\alpha)\xi], \tag{2.1}$$
for all ξ ∈ 𝒟 and all y ∈ ℝ^N, where $\tau_y\check\alpha$ is given by $(\tau_y\check\alpha)(x) = \check\alpha(x-y) = \alpha(y-x)$ for all x ∈ ℝ^N. It can be proved that, for each α ∈ 𝒟 with ∫α = 1, $s_\alpha(\varphi) \in L(\mathscr{D})$, $s_\alpha$ is linear, $s_\alpha \circ s_\alpha = s_\alpha$ ($s_\alpha$ is a projector of L(𝒟)), $\ker s_\alpha = \ker\tilde\zeta$, and $\tilde\zeta \circ s_\alpha = \tilde\zeta$.

Now, for each α ∈ 𝒟 with ∫α = 1, we can define a general α-product of T ∈ 𝒟′ with S ∈ 𝒟′ by setting

$$T\alpha S := T(s_\alpha\varphi), \tag{2.2}$$
where ϕ ∈ L(𝒟) is a representative operator of S ∈ 𝒟′. This α-product is independent of the representative ϕ of S, because if ϕ, ψ are such that $\tilde\zeta(\varphi) = \tilde\zeta(\psi) = S$, then $\varphi - \psi \in \ker\tilde\zeta = \ker s_\alpha$. Hence,
$$T(s_\alpha\varphi) - T(s_\alpha\psi) = T[s_\alpha(\varphi - \psi)] = 0.$$

Since ϕ in (2.2) satisfies $\tilde\zeta(\varphi) = S$, we have ∫ϕ(ξ) = ⟨S, ξ⟩ for all ξ ∈ 𝒟, and by (2.1)

$$[(s_\alpha\varphi)(\xi)](y) = \langle S, (\tau_y\check\alpha)\xi\rangle = \langle S\xi, \tau_y\check\alpha\rangle = (S\xi * \alpha)(y),$$
for all y ∈ ℝ^N, which means that $(s_\alpha\varphi)(\xi) = S\xi * \alpha$. Therefore, for all ξ ∈ 𝒟,
$$\langle T\alpha S, \xi\rangle = \langle T(s_\alpha\varphi), \xi\rangle = \langle T, (s_\alpha\varphi)(\xi)\rangle = \langle T, S\xi * \alpha\rangle = [T * (S\xi * \alpha)\check{\phantom{x}}](0) = [(S\xi)\check{\phantom{x}} * (T * \check\alpha)](0) = \langle (T * \check\alpha)S, \xi\rangle,$$
and we obtain an easier formula for the general product (2.2),
$$T\alpha S = (T * \check\alpha)S. \tag{2.3}$$

In general, this α-product is neither commutative nor associative, but it is bilinear and satisfies the Leibniz rule written in the form

$$D_k(T\alpha S) = (D_kT)\alpha S + T\alpha(D_kS),$$
where $D_k$ is the usual k-th partial derivative operator in the distributional sense (k = 1, 2,...,N).

Recall that the usual Schwartz products of distributions are not associative and the commutative property is a convention inherent to the definition of such products (see the classical monograph of Schwartz [18] pp. 117, 118, and 121, where these products are defined). Unfortunately, the α-product (2.3), in general, is not consistent with the classical Schwartz products of distributions with functions.

## How to get a product consistent with the Schwartz product of a distribution with a C∞-function?

In order to obtain consistency with the usual product of a distribution with a $C^\infty$-function, we are going to introduce some definitions and single out a certain subspace $H_\alpha$ of L(𝒟).

An operator ϕL(𝒟) is said to vanish on an open set Ω ⊂ ℝN, if and only if ϕ(ξ) = 0 for all ξ𝒟 with support contained in Ω. The support of an operator ϕL(𝒟) will be defined as the complement of the largest open set in which ϕ vanishes.

Let 𝒩 be the set of operators ϕ ∈ L(𝒟) whose support has Lebesgue measure zero, and ρ($C^\infty$) the set of operators ϕ ∈ L(𝒟) defined by ϕ(ξ) = βξ for all ξ ∈ 𝒟, with β ∈ $C^\infty$. For each α ∈ 𝒟 with ∫α = 1, let us consider the space $H_\alpha = \rho(C^\infty) \oplus s_\alpha(\mathscr{N}) \subset L(\mathscr{D})$. It can be proved that $\zeta_\alpha := \tilde\zeta|_{H_\alpha} : H_\alpha \to C^\infty \oplus \mathscr{D}'_\mu$ is an isomorphism (𝒟′_μ stands for the space of distributions whose support has Lebesgue measure zero). Therefore, if T ∈ 𝒟′ and S = β + f ∈ $C^\infty \oplus \mathscr{D}'_\mu$, a new α-product, $\dot\alpha$, can be defined by $T\dot\alpha S := T\varphi_\alpha$, where, for each α, $\varphi_\alpha = \zeta_\alpha^{-1}(S) \in H_\alpha$. Hence,

$$T\dot\alpha S = T\zeta_\alpha^{-1}(S) = T[\zeta_\alpha^{-1}(\beta) + \zeta_\alpha^{-1}(f)] = T\beta + T\alpha f = T\beta + (T * \check\alpha)f,$$
and putting α instead of $\check\alpha$ (to simplify), we get
$$T\dot\alpha S = T\beta + (T * \alpha)f. \tag{2.4}$$

Thus, the referred consistency is obtained when the $C^\infty$-function is placed at the right-hand side: if S ∈ $C^\infty$, then f = 0, S = β, and $T\dot\alpha S = T\beta$.

## How to obtain consistency with all Schwartz products of 𝒟′^p-distributions with C^p-functions?

The α-product (2.4) can be easily extended to T ∈ 𝒟′^p and S = β + f ∈ $C^p \oplus \mathscr{D}'_\mu$, where p ∈ {0, 1, 2,...,∞}, 𝒟′^p is the space of distributions of order ≤ p in the sense of Schwartz (𝒟′^∞ means 𝒟′), Tβ is the Schwartz product of a 𝒟′^p-distribution with a C^p-function, and (T * α)f is the usual product of a $C^\infty$-function with a distribution. This extension is clearly consistent with all Schwartz products of 𝒟′^p-distributions with C^p-functions, if the C^p-functions are placed at the right-hand side. It also keeps the bilinearity and satisfies the Leibniz rule written in the form

$$D_k(T\dot\alpha S) = (D_kT)\dot\alpha S + T\dot\alpha(D_kS),$$
clearly under certain natural conditions; for T ∈ 𝒟′^p, we must suppose S ∈ $C^{p+1} \oplus \mathscr{D}'_\mu$. Moreover, these products are invariant under translations, that is,
$$\tau_a(T\dot\alpha S) = (\tau_aT)\dot\alpha(\tau_aS),$$
where $\tau_a$ stands for the usual translation operator in the distributional sense. These products are also invariant under the action of any group of linear transformations h : ℝ^N → ℝ^N with |det h| = 1 that leave α invariant.

Thus, for each α ∈ 𝒟 with ∫α = 1, formula (2.4) allows us to evaluate the product of T ∈ 𝒟′^p with S ∈ $C^p \oplus \mathscr{D}'_\mu$; therefore, we have obtained a family of products, one for each α.

From now on, we always consider the dimension N = 1. For instance, if β is a continuous function, we have for each α, by applying (2.4),

$$\delta\dot\alpha\beta = \delta\dot\alpha(\beta + 0) = \delta\beta + (\delta*\alpha)0 = \beta(0)\delta,$$
$$\beta\dot\alpha\delta = \beta\dot\alpha(0 + \delta) = \beta 0 + (\beta*\alpha)\delta = [(\beta*\alpha)(0)]\delta,$$
$$\delta\dot\alpha\delta = \delta\dot\alpha(0 + \delta) = \delta 0 + (\delta*\alpha)\delta = \alpha\delta = \alpha(0)\delta,$$
$$\delta\dot\alpha(D\delta) = (\delta*\alpha)D\delta = \alpha D\delta = \alpha(0)D\delta - \alpha'(0)\delta,$$
$$(D\delta)\dot\alpha\delta = (D\delta*\alpha)\delta = \alpha'\delta = \alpha'(0)\delta, \tag{2.5}$$
$$H\dot\alpha\delta = (H*\alpha)\delta = \Big[\int_{-\infty}^{+\infty}\alpha(\tau)H(-\tau)\,d\tau\Big]\delta = \Big(\int_{-\infty}^{0}\alpha\Big)\delta. \tag{2.6}$$

For each α, the support of the α-product (2.4) satisfies $\operatorname{supp}(T\dot\alpha S) \subset \operatorname{supp} S$, as for usual functions, but it may happen that $\operatorname{supp}(T\dot\alpha S) \not\subset \operatorname{supp} T$. For instance, if a, b ∈ ℝ, from (2.4) we have

$$(\tau_a\delta)\dot\alpha(\tau_b\delta) = [(\tau_a\delta)*\alpha](\tau_b\delta) = (\tau_a\alpha)(\tau_b\delta) = \alpha(b-a)(\tau_b\delta).$$
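
As a concrete numerical illustration (not part of the paper), the three α-dependent numbers α(0), ∫_{-∞}^0 α and ∫_0^{+∞} α that appear in (2.5), (2.6) and in the product above can be computed for an explicit choice of α ∈ 𝒟; the bump below, centered at 0.2 (a hypothetical choice), is smooth, compactly supported and normalized so that ∫α = 1.

```python
import numpy as np

# A concrete alpha in D with integral 1: a smooth bump centered at 0.2,
# supported on (-0.8, 1.2), so that the two half-line integrals differ.
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
a = np.zeros_like(x)
mask = np.abs(x - 0.2) < 1.0
a[mask] = np.exp(-1.0 / (1.0 - (x[mask] - 0.2) ** 2))
a /= a.sum() * dx                      # normalize: integral of alpha = 1

p = a[x <= 0].sum() * dx               # p = integral of alpha over (-inf, 0]:  H alpha-dot delta = p*delta
q = a[x >= 0].sum() * dx               # q = integral of alpha over [0, +inf):  see (2.10) below
alpha0 = a[x.size // 2]                # alpha(0):  delta alpha-dot delta = alpha(0)*delta
print(p, q, alpha0)
```

Since ∫α = 1, the two half-line integrals always sum to 1, whatever the bump chosen.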

## Other products we need in the present paper

It is also possible to multiply many other distributions, preserving the consistency with all Schwartz products of distributions with functions. For instance, using the Leibniz formula to extend the α-products, it is possible to write

$$T\dot\alpha S = Tw + (T * \alpha)f, \tag{2.7}$$
with T ∈ 𝒟′^{-1} and S = w + f ∈ $L^1_{loc} \oplus \mathscr{D}'_\mu$, where 𝒟′^{-1} stands for the space of distributions T ∈ 𝒟′ such that DT ∈ 𝒟′^0, and Tw is the usual pointwise product of T ∈ 𝒟′^{-1} with w ∈ $L^1_{loc}$. Recall that, locally, T can be read as a function of bounded variation (see [9], Sec. 2 for details). For instance, since H ∈ 𝒟′^{-1} and H = H + 0 ∈ $L^1_{loc} \oplus \mathscr{D}'_\mu$, we have
$$H\dot\alpha H = HH + (H*\alpha)0 = H. \tag{2.8}$$

More generally, if T ∈ 𝒟′^{-1} and S ∈ $L^1_{loc}$, then $T\dot\alpha S = TS$, because by (2.7) we can write

$$T\dot\alpha S = T\dot\alpha(S + 0) = TS + (T*\alpha)0 = TS.$$

Thus, in the distributional sense, the α-products of functions that, locally, are of bounded variation coincide with the usual pointwise product of these functions considered as a distribution. We stress that in (2.4) or (2.7) the convolution T * α is not to be understood as an approximation of T. Those formulas are exact.

Another useful extension is given by the formula

$$T\dot\alpha S = D(Y\dot\alpha S) - Y\dot\alpha(DS), \tag{2.9}$$
for T ∈ 𝒟′^0 ∩ 𝒟′_μ and S, DS ∈ $L^1_{loc} \oplus \mathscr{D}'_c$, where 𝒟′_c ⊂ 𝒟′_μ is the space of distributions whose support is at most countable, and Y ∈ 𝒟′^{-1} is such that DY = T (the products $Y\dot\alpha S$ and $Y\dot\alpha(DS)$ are supposed to be computed by (2.4) or (2.7)). The value of $T\dot\alpha S$ given by (2.9) is independent of the choice of Y ∈ 𝒟′^{-1} such that DY = T (see [9], p. 1004 for the proof). For instance, by (2.6) and (2.9), we have for any α,
$$\delta\dot\alpha H = D(H\dot\alpha H) - H\dot\alpha(DH) = DH - H\dot\alpha\delta = \delta - \Big(\int_{-\infty}^{0}\alpha\Big)\delta = \Big(\int_{0}^{+\infty}\alpha\Big)\delta, \tag{2.10}$$
so that $H\dot\alpha\delta + \delta\dot\alpha H = \delta$ for any α. The products (2.4), (2.7), and (2.9) are compatible, that is, if an α-product can be computed by two of them, the result is the same.

## 3. Powers of distributions

Let M ⊂ 𝒟′ be a set of distributions such that, if T₁, T₂ ∈ M, then $T_1\dot\alpha T_2$ is well defined and $T_1\dot\alpha T_2 \in M$. For each T ∈ M we define the α-power $T_\alpha^n$ by the recurrence relation

$$T_\alpha^n = (T_\alpha^{n-1})\dot\alpha T \quad\text{for } n \ge 1, \qquad\text{with } T_\alpha^0 = 1 \text{ for } T \neq 0; \tag{3.1}$$
naturally, if 0 ∈ M, $0_\alpha^n = 0$ for all n ≥ 1.

Since our distributional products are consistent with the Schwartz products of distributions with functions, when the functions are placed at the right-hand side, we have $\beta_\alpha^n = \beta^n$ for all β ∈ $C^0 \cap M$. Thus, this definition is consistent with the usual definition of powers of C⁰-functions. Moreover, if M is such that $\tau_aT \in M$ for all T ∈ M and all a ∈ ℝ^N, then we also have $(\tau_aT)_\alpha^n = \tau_a(T_\alpha^n)$.

Taking, for instance, M = $C^p \oplus (\mathscr{D}'^p \cap \mathscr{D}'_\mu)$ and supposing T₁, T₂ ∈ M, we have T₁ = β₁ + f₁, T₂ = β₂ + f₂ and, by (2.4), we can write

$$T_1\dot\alpha T_2 = T_1\beta_2 + (T_1*\alpha)f_2 = (\beta_1 + f_1)\beta_2 + [(\beta_1 + f_1)*\alpha]f_2 = \beta_1\beta_2 + f_1\beta_2 + [(\beta_1 + f_1)*\alpha]f_2 \in M.$$

Therefore, we can define α-powers $T_\alpha^n$ of distributions T ∈ $C^p \oplus (\mathscr{D}'^p \cap \mathscr{D}'_\mu)$. For instance, if m ∈ ℂ∖{0}, we have $(m\delta)_\alpha^0 = 1$, $(m\delta)_\alpha^1 = m\delta$, and, for n ≥ 2, $(m\delta)_\alpha^n = m^n[\alpha(0)]^{n-1}\delta$, as can be easily seen by induction applying (2.5).

Setting M = 𝒟′^{-1} and supposing T₁, T₂ ∈ 𝒟′^{-1}, we have $T_1\dot\alpha T_2 \in \mathscr{D}'^{-1}$. Thus, we can also define α-powers $T_\alpha^n$ of distributions T ∈ 𝒟′^{-1} by the recurrence relation (3.1), and clearly we get

$$T_\alpha^n = T^n,$$
that is, in the distributional sense, the α-powers of functions that, locally, are of bounded variation coincide with the usual powers of these functions when considered as distributions.

In the sequel we will write, in all cases, $T^n$ instead of $T_\alpha^n$, supposing α fixed. For instance, if m ∈ ℝ, we will write $(m\delta)^1 = m\delta$ and, for n ≥ 2, $(m\delta)^n = m^n[\alpha(0)]^{n-1}\delta$.

Taking M = {a + (b−a)H + mδ : a, b, m ∈ ℝ} we have:

### Theorem 3.1.

Given α, let us suppose a, b, m ∈ ℝ, $p = \int_{-\infty}^{0}\alpha$, $q = \int_{0}^{+\infty}\alpha$ and λ = α(0)m + (b−a)q. Then,

$$[a + (b-a)H + m\delta]^n = a^n + (b^n - a^n)H + m[P_{n-1}(a+\lambda)]\delta, \tag{3.2}$$
where $P_{n-1}$ is the polynomial defined by the recurrence relation $P_0(s) = 1$ and, for n ≥ 1, $P_n(s) = sP_{n-1}(s) + pb^n + qa^n$.

For a proof, see [12], p. 335.
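
Theorem 3.1 can be checked mechanically. The sketch below (an illustrative toy model, not code from the paper) represents elements of M by their triples (a, b, m) and multiplies them by bilinearity, using the elementary products H α̇ H = H, H α̇ δ = pδ, δ α̇ H = qδ and δ α̇ δ = α(0)δ of Section 2; repeated multiplication is then compared with the closed form (3.2). The numerical values of p, q and α(0) are arbitrary, subject only to p + q = 1.

```python
from dataclasses import dataclass

p, q, alpha0 = 0.3, 0.7, 0.8   # hypothetical values with p + q = 1

@dataclass
class Dist:
    a: float  # constant value on the left of the origin
    b: float  # value of a + (b-a)H on the right of the origin
    m: float  # coefficient of delta

    def mul(self, other):
        # bilinear expansion of (a1+(b1-a1)H+m1*delta) a-dot (a2+(b2-a2)H+m2*delta)
        T, S = self, other
        m = (T.a * S.m + T.m * S.a
             + (T.b - T.a) * S.m * p     # H a-dot delta = p*delta
             + T.m * (S.b - S.a) * q     # delta a-dot H = q*delta
             + T.m * S.m * alpha0)       # delta a-dot delta = alpha(0)*delta
        return Dist(T.a * S.a, T.b * S.b, m)

def power(T, n):
    # T^n by the recurrence (3.1), starting from T^0 = 1
    R = Dist(1.0, 1.0, 0.0)
    for _ in range(n):
        R = R.mul(T)
    return R

def closed_form(T, n):
    # Theorem 3.1: [a+(b-a)H+m*delta]^n = a^n + (b^n-a^n)H + m*P_{n-1}(a+lam)*delta
    a, b, m = T.a, T.b, T.m
    s = a + alpha0 * m + (b - a) * q       # a + lambda
    P = 1.0                                # P_0(s) = 1
    for j in range(1, n):                  # P_j(s) = s*P_{j-1}(s) + p*b^j + q*a^j
        P = s * P + p * b**j + q * a**j
    return Dist(a**n, b**n, m * P)

T = Dist(0.5, -1.2, 2.0)
print(power(T, 4))
print(closed_form(T, 4))
```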

## 4. Composition of entire functions with distributions

Let ϕ : ℂ → ℂ be an entire function. Then we have

$$\varphi(s) = a_0 + a_1s + a_2s^2 + \cdots \tag{4.1}$$
for the sequence $a_n = \frac{\varphi^{(n)}(0)}{n!}$ of complex numbers and all s ∈ ℂ. If T ∈ M we define the composition ϕ ∘ T by the formula
$$\varphi\circ T = a_0 + a_1T + a_2T^2 + \cdots \tag{4.2}$$
whenever this series converges in 𝒟′. Clearly, this definition is consistent with the usual meaning of ϕ ∘ T, if T ∈ M is a function. Moreover, if M is such that $\tau_aT \in M$ for all T ∈ M and all a ∈ ℝ, we have τ_a(ϕ ∘ T) = ϕ ∘ (τ_aT), if ϕ ∘ T or ϕ ∘ (τ_aT) is well defined. Remember that, in general, ϕ ∘ T depends on α. For instance, taking M = {mδ : m ∈ ℂ}, it is easy to see that
$$\varphi(m\delta) = \begin{cases}\varphi(0) + \varphi'(0)m\delta & \text{if } \alpha(0) = 0,\\[4pt] \varphi(0) + \dfrac{\varphi(m\alpha(0)) - \varphi(0)}{\alpha(0)}\,\delta & \text{if } \alpha(0) \neq 0.\end{cases}$$
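
For α(0) ≠ 0, the second branch follows by summing the powers $(m\delta)^n = m^n[\alpha(0)]^{n-1}\delta$ of Section 3 inside the series (4.2). A short numerical check (not from the paper) with φ = exp and hypothetical values of m and α(0):

```python
import math

# delta-coefficient of the truncated series: sum of a_n * m^n * alpha(0)^(n-1)
# with a_n = 1/n! for phi = exp, compared with [phi(m*alpha(0)) - phi(0)]/alpha(0).
m, alpha0 = 1.7, 0.6
coeff = sum(m**n * alpha0**(n - 1) / math.factorial(n) for n in range(1, 40))
closed = (math.exp(m * alpha0) - 1.0) / alpha0
print(coeff, closed)
```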

We need the following statements:

### Lemma 4.1.

Let a, b ∈ ℂ and let p, q and the sequence $P_n$ be defined as in Theorem 3.1. Still suppose ϕ an entire function defined by (4.1). Then, the function $W_\varphi$ : ℂ → ℂ defined by $W_\varphi(s) = \sum_{n=1}^{\infty} a_nP_{n-1}(s)$ is well defined and satisfies the following conditions:

1. (a)

Wϕ is an entire function;

2. (b)

$$W_\varphi(pa + qb) = \begin{cases}\dfrac{\varphi(b) - \varphi(a)}{b - a} & \text{if } b \neq a,\\[4pt] \varphi'(a) & \text{if } b = a;\end{cases}$$

3. (c)

Wϕ = 0 if and only if ϕ′ = 0.

### Theorem 4.1.

Given α, let a, b, m ∈ ℂ, $q = \int_{0}^{+\infty}\alpha$ and λ = α(0)m + q(b−a). Suppose also that T = a + (b−a)H + mδ and that ϕ is an entire function defined by (4.1). Then,

$$\varphi\circ T = \varphi(a) + [\varphi(b) - \varphi(a)]H + mW_\varphi(a+\lambda)\delta,$$
where Wϕ is defined in lemma 4.1.

For the proofs see [12], Section 4.
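
Statement (b) of Lemma 4.1 can be checked numerically by truncating the series defining $W_\varphi$. The snippet below (an illustration, not from the paper) uses φ = exp (so $a_n = 1/n!$) and hypothetical values of p, q, a, b:

```python
import math

p, q = 0.35, 0.65                   # p = integral of alpha over (-inf,0], q = 1 - p
a, b = -0.4, 1.3
s = p * a + q * b                   # the point pa + qb of Lemma 4.1 (b)

total, P = 0.0, 1.0                 # P starts at P_0(s) = 1
for n in range(1, 40):
    total += P / math.factorial(n)  # add a_n * P_{n-1}(s)
    P = s * P + p * b**n + q * a**n # advance to P_n(s)

divided_difference = (math.exp(b) - math.exp(a)) / (b - a)
print(total, divided_difference)
```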

## 5. The α-solution concept

Let I be an interval of ℝ with more than one point, and let ℱ(I) denote the space of continuously differentiable maps ũ : I → 𝒟′ in the sense of the usual topology of 𝒟′. For t ∈ I, the notation [ũ(t)](x) is sometimes used to emphasize that the distribution ũ(t) acts on functions ξ ∈ 𝒟 depending on x.

Let Σ(I) be the space of functions u : ℝ × I → ℝ such that:

1. (a)

for each t ∈ I, u(x, t) ∈ $L^1_{loc}(\mathbb{R})$;

2. (b)

ũ : I → 𝒟′, defined by [ũ(t)](x) = u(x, t), is in ℱ(I).

The natural injection u ↦ ũ from Σ(I) into ℱ(I) identifies any function in Σ(I) with a certain map in ℱ(I). Since C¹(ℝ×I) ⊂ Σ(I), we can write the inclusions

$$C^1(\mathbb{R}\times I) \subset \Sigma(I) \subset \mathscr{F}(I).$$

Thus, identifying u with ũ, the equation (1.1) can be read as follows:

$$\frac{d\tilde u}{dt}(t) + D[\varphi\circ\tilde u(t)] = k\,\tau_{vt}\delta. \tag{5.1}$$

### Definition 5.1.

Given α, the map ũ ∈ ℱ(I) will be called an α-solution of the equation (5.1) on I if ϕ ∘ ũ(t) is well defined and if this equation is satisfied for all t ∈ I.

This definition sees equation (1.1) as an evolution equation and we have the following results:

### Theorem 5.1.

If u(x, t) is a classical solution of (1.1) on ℝ × I then, for any α, the map ũ ∈ ℱ(I) defined by [ũ(t)](x) = u(x, t) is an α-solution of (5.1) on I.

Note that, by a classical solution of (1.1) on ℝ × I, we mean a C1-function u(x, t) that satisfies (1.1) on ℝ × I.

### Theorem 5.2.

If u : ℝ × I → ℝ is a C¹-function and, for a certain α, the map ũ ∈ ℱ(I) defined by [ũ(t)](x) = u(x, t) is an α-solution of (5.1) on I, then u(x, t) is a classical solution of (1.1) on ℝ × I.

For the proof, it is enough to observe that any C¹-function u(x, t) can be read as a continuously differentiable map ũ ∈ ℱ(I) defined by [ũ(t)](x) = u(x, t) and to use the consistency of the α-products with the classical Schwartz products of distributions with functions.

### Definition 5.2.

Given α, any α-solution ũ of (5.1) on I will be called an α-solution of the equation (1.1) on I.

As a consequence, an α-solution ũ in this sense, read as a usual distribution u, affords a general consistent extension of the concept of a classical solution for the equation (1.1). Thus, and for short, we also call the distribution u an α-solution of (1.1).

As is well known, a weak solution of a differential equation is a function whose derivatives may not exist but which satisfies the equation in some precise sense.

One of the most important definitions is based on the classical theory of distributions. In this theory, the study of differential equations is, of course, restricted to linear equations, owing to the well-known difficulties of multiplying distributions. The classical setting usually considers linear partial differential equations with $C^\infty$-coefficients, and a weak solution is generally defined as satisfying the equation in the sense of distributions.

Linear partial differential evolution equations in the unknown u can be re-interpreted as evolution equations with α-products in the unknown ũ(t), if the (re-interpreted) $C^\infty$-coefficients are placed at the right-hand side of ũ(t) and its derivatives. Actually, in this case, our α-products are consistent with the products of distributions with $C^\infty$-functions. Thus, if u ∈ Σ(I) is a weak solution, then, for any α, the corresponding map ũ ∈ ℱ(I) is an α-solution. Conversely, if u ∈ Σ(I) and, for a certain α, the corresponding map ũ ∈ ℱ(I) is an α-solution, then u is a weak solution. In this sense, the α-solution concept can be identified with the weak solution concept. Meanwhile, an advantage arises: the coefficients of such equations can now be considered as distributions, if the α-products involved are well defined and the solutions are considered as elements of ℱ(I).

Thus, in the framework of evolution equations, the α-solution concept is an extension of the classical solution concept, and may also be considered as a new type of weak solution provided by distribution theory in the nonlinear setting.

## 6. The Riemann problem (1.1), (1.2)

Let us consider the equation (1.1) with (x, t) ∈ ℝ × ℝ (we could also have considered (x, t) ∈ ℝ × [0, +∞[), ϕ an entire function taking real values on the real axis, and the unknown u subjected to the initial condition (1.2) with u1, u2 ∈ ℝ. When we read this problem in ℱ(ℝ), having in mind the identification u ↦ ũ, we must replace the equation (1.1) by the equation (5.1) and the initial condition (1.2) by the following one:

$$\tilde u(0) = u_1 + (u_2 - u_1)H. \tag{6.1}$$

Theorem 6.1 concerns the α-solutions ũ of the problem (5.1), (6.1) in the time interval I = ℝ which belong to a convenient space $\tilde W \subset \mathscr{F}(\mathbb{R})$, defined in the following way: $\tilde u \in \tilde W$ if and only if

$$\tilde u(t) = f(t) + g(t)\tau_{vt}H + h(t)\tau_{vt}\delta, \tag{6.2}$$
for certain C¹-functions f, g, h : ℝ → ℝ and all t ∈ ℝ.

### Theorem 6.1.

Let A = k + v(u2 − u1) − [ϕ(u2) − ϕ(u1)]. Then, given α, the problem (5.1), (6.1) has an α-solution $\tilde u \in \tilde W$ if and only if one of the following four conditions is satisfied:

1. (I)

A = 0;

2. (II)

A ≠ 0, ϕ′ = 0 and v = 0;

3. (III)

A ≠ 0, ϕ′ ≠ 0, u1 = u2 and v = ϕ′(u1);

4. (IV)

A ≠ 0, ϕ′ ≠ 0, u1 ≠ u2 and $v = \dfrac{\varphi(u_2) - \varphi(u_1)}{u_2 - u_1}$.

In any one of these four cases the α-solution is independent of α, is unique in $\tilde W$, and is given by

$$\tilde u(t) = u_1 + (u_2 - u_1)\tau_{vt}H + At\,\tau_{vt}\delta. \tag{6.3}$$

### Proof.

Let us suppose $\tilde u \in \tilde W$. Then, from (6.2), we have, for each t,

$$\frac{d\tilde u}{dt}(t) = f'(t) + g'(t)\tau_{vt}H - vg(t)\tau_{vt}\delta + h'(t)\tau_{vt}\delta - vh(t)\tau_{vt}D\delta = f'(t) + g'(t)\tau_{vt}H + [h'(t) - vg(t)]\tau_{vt}\delta - vh(t)\tau_{vt}D\delta. \tag{6.4}$$

On the other hand, from Theorem 4.1 with a = f(t), b = f(t) + g(t) and m = h(t), we have, for each t,

$$\varphi\circ\tilde u(t) = \varphi[f(t) + g(t)\tau_{vt}H + h(t)\tau_{vt}\delta] = \tau_{vt}\{\varphi[f(t) + g(t)H + h(t)\delta]\} = \varphi[f(t)] + [\varphi(f(t)+g(t)) - \varphi(f(t))]\tau_{vt}H + h(t)W_\varphi[f(t) + \alpha(0)h(t) + qg(t)]\tau_{vt}\delta,$$
$$D[\varphi\circ\tilde u(t)] = [\varphi(f(t)+g(t)) - \varphi(f(t))]\tau_{vt}\delta + h(t)W_\varphi[f(t) + \alpha(0)h(t) + qg(t)]\tau_{vt}D\delta. \tag{6.5}$$

Thus, using (6.4) and (6.5), (5.1) turns out to be

$$f'(t) + g'(t)\tau_{vt}H + [h'(t) - vg(t) + \varphi(f(t)+g(t)) - \varphi(f(t)) - k]\tau_{vt}\delta + h(t)\{W_\varphi[f(t) + \alpha(0)h(t) + qg(t)] - v\}\tau_{vt}D\delta = 0.$$

Therefore, (6.2) is an α-solution of (5.1) if and only if, for each t, the following four equations are satisfied:

$$f'(t) = 0, \qquad g'(t) = 0, \tag{6.6}$$
$$h'(t) = k + vg(t) - \varphi(f(t)+g(t)) + \varphi(f(t)), \tag{6.7}$$
$$h(t)\{W_\varphi[f(t) + \alpha(0)h(t) + qg(t)] - v\} = 0. \tag{6.8}$$

Also from (6.1) and (6.2) we can write

$$f(0) + g(0)H + h(0)\delta = u_1 + (u_2 - u_1)H,$$
and f(0) = u1, g(0) = u2 − u1 and h(0) = 0 follow. Thus, applying (6.6), (6.7) and (6.8), we conclude that (6.2) satisfies the problem (5.1), (6.1) if and only if, for each t, the following four equations are satisfied:
$$f(t) = u_1, \qquad g(t) = u_2 - u_1, \tag{6.9}$$
$$h'(t) = k + v(u_2 - u_1) - [\varphi(u_2) - \varphi(u_1)] = A, \tag{6.10}$$
$$h(t)\{W_\varphi[pu_1 + qu_2 + \alpha(0)h(t)] - v\} = 0. \tag{6.11}$$

From (6.10) we have h(t) = At, and so (6.2) is an α-solution of (5.1), (6.1) if and only if, for all t,

$$At\{W_\varphi[pu_1 + qu_2 + \alpha(0)At] - v\} = 0. \tag{6.12}$$
1. (a)

If A = 0, (6.12) is satisfied and (I) follows.

2. (b)

If A ≠ 0 and ϕ′ = 0, then, by Lemma 4.1 (c) with a = u1 and b = u2, Wϕ = 0 and (6.12) is satisfied if and only if v = 0. Hence, (II) follows.

3. (c)

If A ≠ 0 and ϕ′ ≠ 0, (6.12) is satisfied if and only if, for all t ≠ 0,

$$W_\varphi[pu_1 + qu_2 + \alpha(0)At] = v. \tag{6.13}$$

Let us suppose α(0) = 0. Then, by Lemma 4.1 (b), (6.13) is satisfied if and only if

$$v = \begin{cases}\dfrac{\varphi(u_2) - \varphi(u_1)}{u_2 - u_1} & \text{if } u_2 \neq u_1,\\[4pt] \varphi'(u_1) & \text{if } u_2 = u_1.\end{cases} \tag{6.14}$$

Thus, if α(0) = 0, (III) and (IV) follow.

Let us suppose α(0) ≠ 0. Then, differentiating (6.13) with respect to t, we have necessarily

$$W'_\varphi[pu_1 + qu_2 + \alpha(0)At]\,\alpha(0)A = 0,$$
which means that, for all t ≠ 0,
$$W'_\varphi[pu_1 + qu_2 + \alpha(0)At] = 0.$$

Then, we conclude that W′_ϕ = 0: actually, if W′_ϕ ≠ 0, since W′_ϕ is an entire function (see Lemma 4.1 (a)), for each t ≠ 0 the number pu1 + qu2 + α(0)At would be a zero of W′_ϕ, which is impossible because the zeros of an entire function that does not vanish identically are isolated points.

As a consequence, $W_\varphi$ is a constant function and, applying Lemma 4.1 (b), we have

$$W_\varphi = W_\varphi(pu_1 + qu_2) = \begin{cases}\dfrac{\varphi(u_2) - \varphi(u_1)}{u_2 - u_1} & \text{if } u_1 \neq u_2,\\[4pt] \varphi'(u_1) & \text{if } u_1 = u_2.\end{cases}$$

Therefore, (6.13) turns out to be (6.14).

Thus, if α(0) ≠ 0, (III) and (IV) follow again.

4. (d)

Clearly, in all cases, the α-solution (6.3) is unique in $\tilde W$ and independent of α.

As a consequence of Definition 5.2, we can say that the problem (1.1), (1.2) has a unique α-solution within W, independent of α and given by (1.3).

### Corollary 6.1.

If the Riemann problem (5.1), (6.1) with k = 0 has an α-solution $\tilde u \in \tilde W$, then A = 0.

### Proof

Suppose k = 0. Then A = v(u2 − u1) − [ϕ(u2) − ϕ(u1)], and if u1 = u2, A = 0 follows. If u1 ≠ u2, the following cases of Theorem 6.1 cannot be applied:

• case (II), because if ϕ′ = 0 and v = 0, then ϕ is a constant function and A = 0 follows;

• case (III), because, in this case, u1 = u2;

• case (IV), because if $v = \dfrac{\varphi(u_2) - \varphi(u_1)}{u_2 - u_1}$, then A = 0 follows.

Thus, by theorem 6.1, only case (I) can be applied and A = 0 follows again.

As a consequence of Definition 5.2, we can say that if the Riemann problem (1.5), (1.2) has an α-solution in W, this α-solution is given by the constant state u(x, t) = u1, if u1 = u2, or by the travelling shock wave u(x, t) = u1 + (u2 − u1)H(x − vt), if u1 ≠ u2. Thus, as we said in the introduction, the Riemann problem (1.5), (1.2) can develop neither delta waves nor delta shock waves in W.

## 7. Examples

1. (a)

Let us consider the following Riemann problem

$$u_t + \Big(\frac{u^2}{2}\Big)_x = \delta(x - vt), \tag{7.1}$$
$$u(x,0) = 1 - H(x).$$

Since $\varphi(u) = \frac{u^2}{2}$, k = 1, u1 = 1 and u2 = 0, by Theorem 6.1 it follows that $A = \frac{3}{2} - v$, and only cases (I) and (IV) of this theorem can be applied. Hence, this problem has an α-solution in W if and only if $v = \frac{3}{2}$ or $v = \frac{1}{2}$.

If $v = \frac{3}{2}$, then A = 0 and the unique α-solution in W is the travelling shock wave

$$u(x,t) = 1 - H\Big(x - \tfrac{3}{2}t\Big), \tag{7.2}$$
propagating with speed $\frac{3}{2}$. Thus, as we said in the introduction, (7.2) shows that the impulsive source does not necessarily imply the appearance of an impulse in the solution. Also, recalling that the unique solution of the problem $u_t + (\frac{u^2}{2})_x = 0$, u(x, 0) = 1 − H(x) is the travelling wave
$$u(x,t) = 1 - H\Big(x - \tfrac{1}{2}t\Big), \tag{7.3}$$
it is interesting to note that, among all impulse sources of the form δ(x − vt), only the one with speed $v = \frac{3}{2}$ is able to modify the speed of the travelling wave (7.3).

If $v = \frac{1}{2}$, we have A = 1 and the unique α-solution in W is the delta shock wave

$$u(x,t) = 1 - H\Big(x - \tfrac{1}{2}t\Big) + t\,\delta\Big(x - \tfrac{1}{2}t\Big). \tag{7.4}$$

2. (b)

For the problem

$$u_t + \Big(\frac{u^2}{2}\Big)_x = \delta(x - vt), \qquad u(x,0) = 1,$$

since u1 = u2 = 1, we have A = 1 and only case (III) of Theorem 6.1 can be applied. We conclude that this problem has an α-solution in W if and only if v = 1; this α-solution is unique, independent of α, and is given by the delta wave
$$u(x,t) = 1 + t\,\delta(x - t). \tag{7.5}$$

3. (c)

For the problem

$$u_t + \Big(\frac{u^2}{2} + u\Big)_x = \delta(x - vt), \qquad u(x,0) = 0,$$

we have $\varphi(u) = \frac{u^2}{2} + u$, k = 1, u1 = u2 = 0, A = 1. Thus, this problem has an α-solution in W if and only if v = ϕ′(0) = 1. This α-solution is unique in W, is independent of α, and is given by
$$u(x,t) = t\,\delta(x - t). \tag{7.6}$$
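
The three examples can be re-checked against Theorem 6.1 with a few lines (an illustration, not from the paper):

```python
# Amplitude A = k + v(u2 - u1) - [phi(u2) - phi(u1)] of Theorem 6.1.
def A(phi, k, v, u1, u2):
    return k + v * (u2 - u1) - (phi(u2) - phi(u1))

burgers = lambda u: u**2 / 2

# (a) k = 1, u1 = 1, u2 = 0: A = 3/2 - v, so v = 3/2 gives the pure shock
#     (7.2), while the jump speed v = 1/2 gives the delta shock (7.4).
print(A(burgers, 1, 1.5, 1, 0), A(burgers, 1, 0.5, 1, 0))

# (b) k = 1, u1 = u2 = 1: case (III) forces v = phi'(1) = 1, with A = 1.
print(A(burgers, 1, 1.0, 1, 1))

# (c) phi(u) = u^2/2 + u, k = 1, u1 = u2 = 0: v = phi'(0) = 1, A = 1.
phi_c = lambda u: u**2 / 2 + u
print(A(phi_c, 1, 1.0, 0, 0))
```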

Final comment: interpreting u as a density of mass, any of the solutions (7.4), (7.5) or (7.6) shows concentration of matter at a moving point. The most interesting one is perhaps the solution (7.6), where this concentration is obtained from a vanishing initial condition! Such concentrations of matter at a single point may be associated with the formation of galaxies in one-dimensional models of the universe. Recall that the Burgers equation can be seen as a drastic simplification of the Navier-Stokes equation.

## Acknowledgments

The present research was supported by National Funding from FCT – Fundação para a Ciência e a Tecnologia, under the project UID/MAT/04561/2019.

## References

[1] A. Bressan and F. Rampazzo, On differential systems with vector-valued impulsive controls, Boll. Un. Mat. Ital., Vol. 2B, No. 7, 1988, pp. 641-656.
[3] G. Dal Maso, P. LeFloch, and F. Murat, Definitions and weak stability of nonconservative products, J. Math. Pures Appl., Vol. 74, 1995, pp. 483-548.
[7] C.O.R. Sarrico, About a family of distributional products important in the applications, Port. Math., Vol. 45, 1988, pp. 295-316.
[9] C.O.R. Sarrico, The multiplication of distributions and the Tsodyks model of synapses dynamics, Int. J. Math. Anal., Vol. 6, No. 21, 2012, pp. 999-1014.
[18] L. Schwartz, Théorie des distributions, Hermann, Paris, 1965.